Using the Physics of Electron Beam Interactions to Determine Optimal Sampling and Image Reconstruction Strategies for High Resolution STEM
Published in Anuj Karpatne, Ramakrishnan Kannan, Vipin Kumar, Knowledge-Guided Machine Learning, 2023
Nigel D. Browning, B. Layla Mehdi, Daniel Nicholls, Andrew Stevens
Another algorithm for image inpainting is dictionary learning. Dictionary learning attempts to simultaneously learn a dictionary (i.e., a sparsifying transform) and the sparse representation. The dictionary can be thought of as a summary of the different patterns/textures in the image, and the sparse representation encodes how those patterns are combined to build the image. Algorithms for dictionary learning typically alternate between optimizing the dictionary and the “sparse codes”. Some algorithms use the lasso during the sparse coding step. Most dictionary learning algorithms depend on having all of the pixels, so they typically train on similar datasets and then use the learned dictionary with an algorithm like the lasso for inpainting. This is impractical in STEM, where data collection is expensive and dose issues prevent acquisition of full images. Bayesian factor analysis (Stevens 2018; Zhou et al. 2011) solves this by allowing simultaneous dictionary learning and inpainting (and CS recovery). A comparison of Bayesian factor analysis and the Fourier-L1 method is shown in Figure 10.17.
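To make the alternating scheme concrete, here is a minimal sketch that pairs a lasso sparse coding step with a simple least-squares dictionary update. It is a generic stand-in, not the Bayesian factor analysis method itself; the function name and parameter values are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def dictionary_learning(X, n_atoms, n_iter=20, alpha=0.1, seed=0):
    """Alternate between lasso sparse coding and a least-squares
    dictionary update. X holds one sample per column."""
    rng = np.random.default_rng(seed)
    n_features, n_samples = X.shape
    D = rng.standard_normal((n_features, n_atoms))
    D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
    Z = np.zeros((n_atoms, n_samples))
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    for _ in range(n_iter):
        # Sparse coding step: with D fixed, solve a lasso problem per sample.
        for j in range(n_samples):
            Z[:, j] = lasso.fit(D, X[:, j]).coef_
        # Dictionary update step: with Z fixed, least-squares fit for D.
        D = X @ np.linalg.pinv(Z)
        norms = np.linalg.norm(D, axis=0)
        D[:, norms > 0] /= norms[norms > 0]   # renormalise surviving atoms
    return D, Z
```

Once a dictionary has been learned this way, inpainting reduces to running only the sparse coding step against the observed pixels.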
Dictionary Learning
Published in Angshul Majumdar, Compressed Sensing for Engineers, 2018
Dictionary learning is a synthesis formulation; that is, it learns a basis/dictionary along with the coefficients such that the data can be synthesized (Figure 9.3). In an alternate formulation, a basis is learnt to analyze the data and produce the coefficients; this is the topic of transform learning. The basic formulation is $TX = Z$, where $T$ is the learnt transform, $X$ is the data, and $Z$ are the coefficients.
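A small numerical sketch may help contrast the two views. The shapes and the orthonormal choice of $T$ below are illustrative assumptions, not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 200))             # data, one sample per column

# Synthesis view (dictionary learning): X ~ D Z, with an overcomplete D.
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
Z_syn, *_ = np.linalg.lstsq(D, X, rcond=None)  # codes that synthesize X as D @ Z

# Analysis view (transform learning): T X = Z, the transform acts on the data.
T, _ = np.linalg.qr(rng.standard_normal((64, 64)))  # illustrative orthonormal T
Z_ana = T @ X
assert np.allclose(T.T @ Z_ana, X)             # an orthonormal T inverts exactly
```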
SAR Image Target Recognition Method by Global and Local Dictionary Sparse Representation
Published in Applied Artificial Intelligence, 2023
Hongliang Han, Wei Lu, Fan Feng
Additionally, the strong structural attributes of the dictionary give its update rules a stronger theoretical basis (Shi et al. 2021). In 2006, the K-Singular Value Decomposition (K-SVD) dictionary learning algorithm was proposed. It employs singular value decomposition to solve a rank-one approximation problem and updates each atom one by one, which simplifies the computation (Xue et al. 2020). Basis functions resembling the receptive fields of cells in the primary visual cortex were obtained using the sparse coding neural gas algorithm (Xu 2021). The idea of sparse orthogonal transformation was proposed later: images are first clustered according to certain characteristics, and an orthogonal dictionary is then trained separately for each class. Because the dictionary is orthogonal, forward and inverse transformations are easy.
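The atom-by-atom SVD update is easy to see in code. The following is a minimal K-SVD sketch (not the SAR recognition method of this paper), using scikit-learn's OMP solver for the sparse coding step; the function name and parameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd(X, n_atoms, sparsity, n_iter=10, seed=0):
    """Minimal K-SVD: OMP sparse coding, then atom-by-atom SVD updates.
    X holds one training sample per column."""
    rng = np.random.default_rng(seed)
    n_features, n_samples = X.shape
    D = rng.standard_normal((n_features, n_atoms))
    D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
    for _ in range(n_iter):
        # Sparse coding with D fixed: OMP with a fixed sparsity level.
        Z = orthogonal_mp(D, X, n_nonzero_coefs=sparsity)
        # Update each atom and its coefficients via a rank-one SVD.
        for k in range(n_atoms):
            users = np.nonzero(Z[k, :])[0]       # samples that use atom k
            if users.size == 0:
                continue
            # Residual with atom k's contribution removed, on its users only.
            E = X[:, users] - D @ Z[:, users] + np.outer(D[:, k], Z[k, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]                    # best rank-one atom
            Z[k, users] = s[0] * Vt[0, :]        # matching coefficients
    return D, Z
```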
Multi-mode dictionaries for fast CS-based dynamic MRI reconstruction
Published in The Imaging Science Journal, 2023
Minha Mubarak, Thomas James Thomas, Sheeba Rani J, Deepak Mishra
The disadvantages of fixed transforms were discussed in [1], as a result of which most methods exploit adaptive transforms obtained from dictionary learning. Dictionary learning is based on the paradigm of sparse modelling, which states that natural signals can be approximated as a linear combination of a few basic signal components referred to as atoms. The collection of atoms arranged column-wise to form a redundant matrix is referred to as the dictionary, and the process of learning this model or dictionary is called dictionary learning. Given an image $x$, let $x_{ij} \in \mathbb{R}^{n}$ be the vector representation of a square 2D image patch indexed by the location $(i,j)$ of its top-left corner in the image. A dictionary $D$ can be used to approximate $x_{ij}$ as [3]:

$$x_{ij} \approx D\alpha_{ij},$$

where $\alpha_{ij}$ is a sparse coefficient vector.
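A sketch of this patch-based model is below; the image size, patch size, and dictionary size are illustrative, not those used in the paper.

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
image = rng.random((64, 64))                 # stand-in for one image frame

# Vectorise every 8x8 patch x_ij, indexed by its top-left corner (i, j).
patches = extract_patches_2d(image, (8, 8)).reshape(-1, 64)
patches = patches - patches.mean(axis=1, keepdims=True)

# Learn a redundant dictionary and sparse codes so that x_ij ~ D alpha_ij.
# (scikit-learn stores atoms as rows, so reconstruction is codes @ components_.)
dl = MiniBatchDictionaryLearning(n_components=128,
                                 transform_algorithm="omp",
                                 transform_n_nonzero_coefs=5)
codes = dl.fit(patches).transform(patches)
approx = codes @ dl.components_              # D alpha for every patch
```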
A novel data representation framework based on nonnegative manifold regularisation
Published in Connection Science, 2021
Yan Jiang, Wei Liang, Jintian Tang, Hongbo Zhou, Kuan-Ching Li, Jean-Luc Gaudiot
Dictionary learning represents data as sparse combinations of atoms from an overcomplete dictionary (Bao et al., 2015). Sparse coding techniques have been widely used in signal processing, image inpainting, and image clustering. Given a sample dataset $X = [x_1, \ldots, x_n]$, sparse coding aims to find an overcomplete dictionary $D = [d_1, \ldots, d_k]$, where $d_i$ is the $i$th column vector of $D$, such that each sample can be well approximated by a linear combination of the atoms of $D$; the coefficients in $S$ are the newly learned representation of the original samples. The regulariser in Equation (4) guarantees that most coefficients in $S$ will shrink to zero.
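The shrinkage effect of the regulariser can be seen in a small sketch, assuming an $\ell_1$ penalty as in standard sparse coding; the dimensions and penalty weight below are illustrative.

```python
import numpy as np
from sklearn.decomposition import sparse_encode

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 32))            # 100 samples, 32-dimensional
D = rng.standard_normal((64, 32))             # 64 atoms: overcomplete dictionary
D /= np.linalg.norm(D, axis=1, keepdims=True)

# Lasso-based sparse coding: min_s ||x - s D||^2 + alpha * ||s||_1 per sample.
S = sparse_encode(X, D, algorithm="lasso_lars", alpha=0.5)
print("fraction of exactly-zero coefficients:", np.mean(S == 0))
```

Most entries of `S` come out exactly zero, which is the shrinkage behaviour the regulariser is meant to enforce.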