Compressed Sensing Theory and Algorithms for Imaging Applications
Published in Jeffrey P. Simmons, Lawrence F. Drummy, Charles A. Bouman, Marc De Graef, Statistical Methods for Materials Science, 2019
Given a direct observation of a vector $x_0$, finding the best sparse approximation (in a least-squares sense) is a simple task. We formulate this problem as follows: given $x_0$, solve
$$\min_{x} \|x_0 - x\|_2^2 \quad \text{subject to} \quad \|x\|_0 \le K,$$
whose solution is obtained by keeping the $K$ largest-magnitude entries of $x_0$ and setting the remaining entries to zero (hard thresholding).
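In code, this best-$K$-term approximation reduces to hard thresholding. A minimal NumPy sketch (the names `best_k_term`, `x0` and `k` are illustrative, not from the chapter):

```python
import numpy as np

def best_k_term(x0: np.ndarray, k: int) -> np.ndarray:
    """Best k-sparse approximation of x0 in the least-squares sense:
    keep the k largest-magnitude entries, zero out the rest."""
    x = np.zeros_like(x0)
    if k <= 0:
        return x
    # Indices of the k entries with largest absolute value
    idx = np.argpartition(np.abs(x0), -k)[-k:]
    x[idx] = x0[idx]
    return x

x0 = np.array([0.1, -3.0, 0.5, 2.2, -0.05])
print(best_k_term(x0, 2))   # -> [ 0.  -3.   0.   2.2  0. ]
```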
Watermarking and Fingerprinting Techniques for Multimedia Protection
Published in Ling Guan, Yifeng He, Sun-Yuan Kung, Multimedia Image and Video Processing, 2012
Sridhar Krishnan, Xiaoli Li, Yaqing Niu, Ngok-Wah Ma, Qin Zhang
The fingerprint, as a compacted version of the original file, can be obtained using the following techniques: (1) Principal component analysis (PCA), a data-driven approach (not involving any external basis) that decomposes the data and can reduce the dimension of the fingerprint by retaining only the most significant principal components. (2) The discrete wavelet transform (DWT), an effective method for signal decomposition, based on describing the signal in a given orthogonal wavelet basis. The orthogonal wavelet coefficients support a multiresolution view of the signal, so that lower-resolution versions of the signal are supported by sparser sets of coefficients. (3) Sparse approximation algorithms. Unlike the previous two techniques, which decompose the signal over weighted orthogonal bases, sparse approximation algorithms find an optimal decomposition of the signal over a collection of elements whose number exceeds the dimension of the signal; such a collection is called a redundant dictionary. Among the existing sparse approximation algorithms, l1-optimisation methods (basis pursuit [19], the least absolute shrinkage and selection operator (LASSO)) and greedy algorithms (e.g., MP [37] and its variants) have been studied extensively and shown to have good decomposition performance [41]. The basis pursuit approach minimizes the l1 norm of the decomposition using linear programming techniques. It has higher complexity, but the solution it obtains generally has good sparsity properties, without, however, reaching the optimal solution that would be obtained by minimizing the l0 norm. The matching pursuit approach incrementally optimizes the decomposition of the signal by searching, at each stage, for the dictionary element that is best correlated with the signal to be decomposed, and then subtracting that element's contribution from the signal. This greedy algorithm is suboptimal, but it has good properties regarding the decrease of the error and the flexibility of its implementation.
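A minimal NumPy sketch of the greedy matching-pursuit idea described above (the random dictionary, stopping rule and variable names are illustrative assumptions, not the chapter's implementation):

```python
import numpy as np

def matching_pursuit(D: np.ndarray, y: np.ndarray, n_iter: int = 10, tol: float = 1e-6):
    """Greedy MP: at each step pick the dictionary atom (column of D, assumed
    unit-norm) most correlated with the residual, then subtract its contribution."""
    residual = y.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        correlations = D.T @ residual           # correlation of every atom with the residual
        best = int(np.argmax(np.abs(correlations)))  # best-matching atom
        coeffs[best] += correlations[best]      # accumulate its coefficient
        residual -= correlations[best] * D[:, best]
        if np.linalg.norm(residual) < tol:
            break
    return coeffs, residual

# Toy redundant dictionary: more atoms (20) than signal dimensions (8)
rng = np.random.default_rng(0)
D = rng.standard_normal((8, 20))
D /= np.linalg.norm(D, axis=0)                  # normalise atoms to unit norm
y = 2.0 * D[:, 3] - 1.5 * D[:, 11]              # a 2-sparse signal in the dictionary
c, r = matching_pursuit(D, y, n_iter=20)
print(np.linalg.norm(r))                        # residual norm decreases toward 0
```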
Non-parametric modelling and simulation of spatiotemporally varying geo-data
Published in Georisk: Assessment and Management of Risk for Engineered Systems and Geohazards, 2022
Yu Wang, Yue Hu, Kok-Kwang Phoon
A two-step implementation is generally required for conventional spectral analysis of spatiotemporally varying geo-data. The first step is data acquisition, which is strictly subject to the limit of the Nyquist–Shannon sampling theorem. The second step is to carry out a mathematical transform, e.g. the Fourier transform, on the sampled data points, from the spatiotemporal domain to the frequency domain; this step is also bounded by the Nyquist frequency. In contrast, CS provides a new paradigm that goes against the common wisdom on data acquisition and the limit set by the Nyquist–Shannon sampling theorem. It asserts that a spatiotemporal data profile can be recovered from far fewer sampled data points than the number required by the Nyquist–Shannon sampling theorem, and that CS can identify frequency contents higher than the Nyquist frequency. Sparse approximation is at the centre of the CS framework: it enables the identification and estimation of dominant frequency contents, which might be higher than the Nyquist frequency, from sparsely sampled data with unequal sampling intervals.
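As a rough illustration of that idea (not the authors' method), the sketch below builds an overcomplete dictionary of sinusoids on a fine frequency grid, evaluates it at irregular sample locations, and uses scikit-learn's `OrthogonalMatchingPursuit` as an off-the-shelf sparse-approximation solver; the frequency grid, sampling rate, signal and solver choice are all assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)

# Irregularly sampled record: 30 samples over 10 s (average rate 3 Hz,
# nominal Nyquist ~1.5 Hz) of a 2.3 Hz sinusoid.
t = np.sort(rng.uniform(0, 10, size=30))
y = np.sin(2 * np.pi * 2.3 * t)

# Overcomplete dictionary of cosines/sines on a fine frequency grid up to 5 Hz
freqs = np.linspace(0.05, 5.0, 200)
D = np.hstack([np.cos(2 * np.pi * freqs * t[:, None]),
               np.sin(2 * np.pi * freqs * t[:, None])])

# Sparse approximation: only a few atoms should be active
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=4).fit(D, y)
coef = omp.coef_
active = np.unique(np.concatenate([freqs[np.abs(coef[:200]) > 1e-3],
                                   freqs[np.abs(coef[200:]) > 1e-3]]))
print(active)   # dominant frequency near 2.3 Hz, above the nominal Nyquist limit
```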
An Automatic Classification Method for Multiple Music Genres by Integrating Emotions and Intelligent Algorithms
Published in Applied Artificial Intelligence, 2023
LASSO realizes sparse approximation by converting the sparse approximation problem into a convex optimization problem: it constrains (shrinks) the regression coefficients, that is, it estimates the sparse coefficients so as to minimize the reconstruction error, e.g. $\min_{\beta} \|y - D\beta\|_2^2 + \lambda\|\beta\|_1$. The algorithm adopts quadratic programming in the reconstruction process, which increases the computational complexity and makes the reconstruction very time-consuming. For this reason, LARS (least angle regression) is used to solve the LASSO problem.
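A minimal sketch of solving a LASSO-type sparse approximation with a LARS-based solver, here scikit-learn's `LassoLars` on synthetic data (the dictionary, noise level and `alpha` value are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.linear_model import LassoLars

rng = np.random.default_rng(0)

# Synthetic sparse-coding problem: y = D @ beta + noise with a 3-sparse beta
D = rng.standard_normal((100, 50))
beta_true = np.zeros(50)
beta_true[[4, 17, 32]] = [2.0, -1.5, 0.8]
y = D @ beta_true + 0.01 * rng.standard_normal(100)

# LARS-based LASSO: the l1 penalty (alpha) shrinks most coefficients exactly to zero
model = LassoLars(alpha=0.01).fit(D, y)
print(np.flatnonzero(model.coef_))                              # recovered support, ideally {4, 17, 32}
print(np.linalg.norm(y - D @ model.coef_ - model.intercept_))   # reconstruction error
```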
Phase coherent noise reduction in digital holographic microscopy based on adaptive non-convex sparse regularization
Published in Journal of Modern Optics, 2022
Liming Gao, Hongqiang Yu, Shuhai Jia, Xing Zhou
In quantitative phase imaging with digital holographic microscopy, the phase image is disturbed by coherent noise, and the noisy phase image can be modelled as
$$\psi = \phi + n, \qquad (1)$$
where $\phi$ represents the phase image without noise and $n$ represents the phase coherent noise. Sparse approximation is a useful method for eliminating noise; the $\ell_1$ norm is classically applied as a regularizer (penalty) since it induces sparsity effectively. The sparse approximation model can be expressed as a sparse-regularized least-squares minimization:
$$\min_{c} \; \tfrac{1}{2}\,\|\psi - A c\|_2^2 + \lambda \|c\|_1, \qquad (2)$$
where $c$ is the vector of sparse representation coefficients, the matrix $A$ represents a linear transformation, and $\lambda$ balances the sparsity and fidelity terms of the cost function. In this case, the phase image admits a sparse representation under the linear transformation. Equation (2) is a convex optimization whose approximate solution can be obtained with the iterative shrinkage-thresholding algorithm (ISTA), a common solver for this problem [38]. Algorithm 1 gives an example of ISTA, in which the soft-thresholding operator is given by
$$\mathrm{soft}(x, T) = \mathrm{sign}(x)\,\max(|x| - T,\, 0).$$
There are two problems when applying the $\ell_1$ norm as the penalty for phase coherent noise reduction. First, it underestimates the high-amplitude components [37] of the phase images. Second, it cannot adaptively change to balance the sparsity and the fidelity of the cost function. In the next section, the generalized minimax-concave (GMC) penalty is adopted to obtain a more accurate estimation of the amplitude components of phase images. Moreover, we propose an adaptive strategy for selecting the sparse representation coefficients.
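A compact NumPy sketch of ISTA for the $\ell_1$-regularized least-squares problem in Equation (2) (the transform $A$, step size and toy data are illustrative assumptions; this is the baseline ISTA, not the paper's adaptive GMC method):

```python
import numpy as np

def soft_threshold(x: np.ndarray, T: float) -> np.ndarray:
    """soft(x, T) = sign(x) * max(|x| - T, 0)"""
    return np.sign(x) * np.maximum(np.abs(x) - T, 0.0)

def ista(A: np.ndarray, psi: np.ndarray, lam: float, n_iter: int = 200) -> np.ndarray:
    """ISTA for min_c 0.5*||psi - A c||_2^2 + lam*||c||_1."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ c - psi)              # gradient of the data-fidelity term
        c = soft_threshold(c - grad / L, lam / L)
    return c

# Toy denoising: A is a random orthonormal "transform", psi a noisy signal
rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((64, 64)))
c_true = np.zeros(64)
c_true[[3, 20, 41]] = [1.0, -2.0, 0.5]
psi = A @ c_true + 0.05 * rng.standard_normal(64)
c_hat = ista(A, psi, lam=0.1)
print(np.flatnonzero(np.abs(c_hat) > 1e-3))     # recovered sparse support
```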