Medical Imaging
Published in Angshul Majumdar, Compressed Sensing for Engineers, 2018
In robust principal component analysis (RPCA), the problem is to decompose a signal into its sparse and low-rank components. This has been used successfully for background–foreground separation in videos. Let x_t be the t-th video frame. It can be assumed to be composed of two parts: a slow-moving background and a fast-moving foreground. Moreover, the foreground typically occupies a much smaller region than the background.
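The modeling assumption above can be illustrated with a small synthetic sketch (the frame size and the moving "object" are hypothetical, not from the source): stacking vectorized frames x_t as the columns of a matrix M makes the static background a low-rank component, while the small moving foreground contributes only a few nonzero deviations per column.

```python
import numpy as np

rng = np.random.default_rng(1)
h, w, T = 8, 8, 20                          # hypothetical frame size and count

background = rng.random((h, w))             # one static background scene
frames = np.repeat(background[None], T, axis=0)
for t in range(T):
    frames[t, 3, t % w] += 1.0              # a tiny "object" moving across row 3

# Columns of M are the vectorized frames x_t.
M = frames.reshape(T, h * w).T

# The shared background keeps the rank far below min(M.shape),
# even though each frame also carries a sparse foreground deviation.
rank_M = np.linalg.matrix_rank(M)
print(rank_M, "of possible", min(M.shape))
```

Because every column shares the same background and the foreground touches only a handful of pixels, M is (approximately) a low-rank matrix plus a sparse one, which is exactly the structure RPCA exploits.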
A partial PPA block-wise ADMM for multi-block linearly constrained separable convex optimization
Published in Optimization, 2021
Yuan Shen, Xingying Zhang, Xiayang Zhang
In this subsection, we consider the robust principal component analysis (RPCA) problem, which aims at recovering a low-rank matrix and a sparse matrix from their sum. This problem arises in various areas such as model selection and image processing, see [31,32]. Specifically, the following convex relaxation is usually adopted:

min_{L,S} ‖L‖_* + λ‖S‖_1  subject to  L + S = C,  (61)

where C is the data matrix. The nuclear norm ‖L‖_* is a convex surrogate for rank that induces the low-rank component of C, while the elementwise ℓ1 norm ‖S‖_1 captures the sparse component of C. It has been verified that, under certain mild assumptions, the model (61) recovers the original components accurately.
Robust principal component analysis with projection learning for image classification
Published in Journal of Modern Optics, 2020
In addition, researchers have also proposed low-rank representation methods to construct a robust graph for data clustering [22–26]. Assuming that the original data space contains a low-rank part and a sparse part, Wright et al. proposed robust principal component analysis (RPCA) [27] to effectively recover the low-rank subspace and robustly correct the errors. This differs from the way PCA handles the low-rank subspace problem: if the noise in the data follows a Gaussian distribution with small variance, PCA can find the low-rank subspace structure; otherwise, it fails. Assuming instead that the original data space consists of a union of multiple low-rank subspaces, the low-rank representation (LRR), proposed by Liu et al., can be regarded as a generalization of the single-subspace problem addressed by RPCA. Liu et al. also proposed latent low-rank representation (LatLRR), which handles subspace segmentation and feature extraction jointly by accounting for the effects of unobserved (latent) data. However, these methods focus only on learning a global low-rank representation of all samples to construct a graph for data clustering, ignoring the local geometric relationships between samples.
An alternating minimization method for robust principal component analysis
Published in Optimization Methods and Software, 2019
Robust principal component analysis (RPCA) is an important multivariate tool that exploits special structures within data. This method of analysis is increasingly important in applications arising from diverse fields such as statistics, computer vision, signal processing, data mining, and compression [15,26,28,35]. When the data have both low-rank and sparse structures, such as when a low-rank structure is contaminated by high-magnitude impulse noise, RPCA can restore the original data while preserving most of its information. For instance, the frames of a surveillance video comprise a static background and moving objects such as pedestrians and automobiles; the moving objects represent the sparse component, while the background scene corresponds to the low-rank structure. For a collection of text documents, the low-rank component could capture common words used in all documents, while the sparse component may capture the key words that distinguish one document from another [9]. In facial recognition applications, face images are taken under varying illumination conditions; the underlying face can be represented by a low-dimensional linear subspace [2], while the large-magnitude errors caused by shadows and specularities remain sparse in the spatial domain. In all of the aforementioned applications, the underlying problem can be characterized as the recovery of a low-rank matrix from large but sparse errors.
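Alternating minimization methods for this recovery problem are built from two proximal operators: elementwise soft thresholding for the ℓ1 (sparse) term and singular value thresholding for the nuclear-norm (low-rank) term. A minimal self-contained sketch (function names are my own, not from the paper):

```python
import numpy as np

def soft_threshold(X, tau):
    """Proximal operator of tau*||.||_1: shrink each entry toward zero."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal operator of tau*||.||_*.
    Soft-thresholds the singular values, which lowers the rank."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Small demonstration: thresholding the singular values of a diagonal
# matrix kills those below tau, reducing the rank.
Y = svt(np.diag([3.0, 1.0, 0.2]), 0.5)
z = soft_threshold(np.array([2.0, -0.3]), 0.5)
```

An alternating scheme simply applies these two operators in turn to the current estimates of the low-rank and sparse parts, enforcing that they sum to the data matrix.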