Matrix Completion Methods
Published in Joseph Suresh Paul, Raji Susan Mathew, Regularized Image Reconstruction in Parallel MRI with MATLAB®, 2019
Joseph Suresh Paul, Raji Susan Mathew
The area of matrix completion is a recent field of study following the track of what has been explored in compressed sensing (CS). Matrix completion deals with the recovery of an incompletely filled data matrix whose known samples may be corrupted with noise or perturbations. As in CS, matrix completion algorithms therefore reconstruct the data matrix from a small subset of its noise-corrupted entries. The missing entries can be recovered under certain conditions when the data matrix has low rank [1]. The conditions mainly stipulate that the number of available entries does not fall below a certain limit and that no row or column of the matrix is completely unknown. A large body of current research focuses on improving the accuracy of reconstruction of the unknown entries of a low-rank matrix whose rank is a priori unknown.
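The recovery described above can be illustrated with a minimal sketch (not the chapter's own algorithm): alternating projection between the set of rank-r matrices (via truncated SVD) and the set of matrices agreeing with the observed entries. All names here are illustrative assumptions.

```python
import numpy as np

def complete_lowrank(M_obs, mask, rank, n_iters=500):
    """Recover a low-rank matrix from a subset of its entries by
    alternating projection with a hard rank constraint (illustrative sketch)."""
    X = np.where(mask, M_obs, 0.0)  # unobserved entries initialized to zero
    for _ in range(n_iters):
        # Project onto the set of rank-`rank` matrices via truncated SVD.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Re-impose agreement with the known entries.
        X[mask] = M_obs[mask]
    return X

# Example: a 30 x 30 rank-2 matrix with roughly 60% of entries observed.
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random(M.shape) < 0.6
X_hat = complete_lowrank(M, mask, rank=2)
rel_err = np.linalg.norm(X_hat - M) / np.linalg.norm(M)
```

With enough observed entries and a correctly guessed rank, the relative error drops to near zero; when the rank is unknown, as the excerpt notes, choosing it becomes part of the reconstruction problem.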
Matrix Completion Problems
Published in Leslie Hogben, Richard Brualdi, Anne Greenbaum, Roy Mathias, Handbook of Linear Algebra, 2006
Matrix completion problems arise in applications whenever a full set of data is not available, but it is known that the full matrix of data must have certain properties. Such applications include molecular biology and chemistry (see Chapter 60), seismic reconstruction problems, mathematical programming, and data transmission, coding, and image enhancement problems in electrical and computer engineering.
Image inpainting via Smooth Tucker decomposition and Low-rank Hankel constraint
Published in International Journal of Computers and Applications, 2023
Jing Cai, Jiawei Jiang, Yibing Wang, Jianwei Zheng, Honghui Xu
Matrix completion theory assumes that the original data can be accurately reconstructed from a small number of incoherent elements. The high-level idea is to establish and exploit reasonable correlations between different elements. Generally speaking, without any assumptions, image inpainting is an ill-posed problem. However, it has been shown that images possess underlying characteristics, i.e. low-rank (LR) structure [15–17], smoothness [18,19], or sparsity [20], that make matrix completion theoretically possible. In [18], the authors suggested filling each color channel separately, followed by a concatenation operation for the final outcome. Within a unified framework for highly effective image restoration, Zha et al. [21] proposed a low-rankness guided group sparse representation (LGSR) model, which jointly exploits the sparsity and LR priors of each group of similar patches. Using a truncated-quadratic loss function together with non-convex, non-smooth priors, Wang et al. [22] proposed a robust matrix completion model. Zheng et al. [23] proposed a weighted nonlocal second-order model, which better approximates the real-world image distribution and manifests both smoothness and low dimensionality. However, these approaches ignore the intrinsic multidimensional structure and the underlying correlation information in tensors.
A Bregman stochastic method for nonconvex nonsmooth problem beyond global Lipschitz gradient continuity
Published in Optimization Methods and Software, 2023
The matrix completion problem has shown its power in many practical real-world applications, such as recommender systems [31], signal processing [28], etc. Suppose that $M \in \mathbb{R}^{m \times n}$ is a low-rank matrix with $\mathrm{rank}(M) \ll \min(m, n)$. The target is to recover $M$ from a few observed entries. Let $\Omega \subseteq [m] \times [n]$ index the observed entries. This is cast into the following optimization problem, $\min_X \frac{1}{2}\|P_\Omega(X - M)\|_F^2$, where the linear operator $P_\Omega$ is defined as $[P_\Omega(X)]_{ij} = X_{ij}$ if $(i, j) \in \Omega$ and 0 if $(i, j) \notin \Omega$. This model can be covered by the optimization problem (1).
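The sampling operator and objective above can be sketched directly; this is a minimal illustration (function names are my own), using a boolean mask to represent the index set Ω.

```python
import numpy as np

def P_Omega(X, mask):
    """Sampling operator: [P_Omega(X)]_ij = X_ij if (i, j) is observed, else 0."""
    return np.where(mask, X, 0.0)

def objective(X, M, mask):
    """f(X) = 1/2 * ||P_Omega(X - M)||_F^2; its gradient is P_Omega(X - M)."""
    residual = P_Omega(X - M, mask)
    return 0.5 * np.sum(residual ** 2)

# Tiny example: the objective vanishes at X = M, and the gradient
# is supported only on the observed index set Omega.
M = np.array([[1.0, 2.0], [3.0, 4.0]])
mask = np.array([[True, False], [True, True]])
f0 = objective(M, M, mask)                 # exactly zero at X = M
grad = P_Omega(np.ones_like(M), mask)      # zero outside Omega
```

Because the gradient is supported on Ω only, first-order methods for this model update the unobserved entries solely through the low-rank structure, not through the data-fit term.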
Generalized Principal Component Analysis: Projection of Saturated Model Parameters
Published in Technometrics, 2020
Andrew J. Landgraf, Yoonkyung Lee
Filling in the missing values of the training matrix is a typical matrix completion problem. Using the projection formulation we have developed, a fitted value for a missing $x_{ij}$ is given by $\hat{x}_{ij} = g^{-1}(\hat{\theta}_{ij})$, where the weight $w_{ij}$ is zero since $x_{ij}$ is missing and we set the saturated parameter $\tilde{\theta}_{ij} = 0$. The inverse link $g^{-1}$ is the identity function for standard PCA and the exponential function for Poisson PCA. For standard PCA, we set any fitted values below zero to zero, since negative values are infeasible.
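A hypothetical sketch of this idea for the standard-PCA (identity link) case: missing entries enter the saturated matrix as zeros, the rows are projected onto the top-k principal subspace, and negative fitted values are clipped to zero. This is an assumption-laden simplification (no centering, no main-effects term), not the authors' exact estimator.

```python
import numpy as np

def pca_fitted_values(X, mask, k):
    """Illustrative sketch: rank-k PCA projection producing fitted values
    for missing entries under the identity link (standard PCA)."""
    Theta = np.where(mask, X, 0.0)       # saturated parameters; 0 where missing
    _, _, Vt = np.linalg.svd(Theta, full_matrices=False)
    V = Vt[:k].T                         # top-k principal directions
    fitted = Theta @ V @ V.T             # project rows onto the k-dim subspace
    return np.clip(fitted, 0.0, None)    # negative fits are infeasible -> 0

# Example on nonnegative (count-like) data with ~20% of entries missing.
rng = np.random.default_rng(1)
X = rng.poisson(3.0, size=(20, 10)).astype(float)
mask = rng.random(X.shape) < 0.8
F = pca_fitted_values(X, mask, k=3)
```

Note that under the Poisson link the exponential inverse link already guarantees positive fitted values, which is why clipping is only needed in the standard-PCA case.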