Applications to Data Science
Published in Suman Saha, Shailendra Shukla, Advanced Data Structures, 2019
SVD calculation is of theoretical interest because it provides the closest matrix of a given rank to the original. For many applications where the data matrix is large, however, computing the SVD can be impractical because it requires a large number of operations and a large amount of memory. Recent studies have therefore focused on algorithms that are not optimal, in the sense that the low-rank matrix they compute is not as close to the original as the SVD truncation; reported work shows that they have an advantage over SVD-based algorithms in that they require less memory. Low-rank approximations have various applications, such as latent semantic indexing, support vector machine training, machine learning, computer vision, and web search models. In the network setting, for example, the data consist of a matrix of pairwise distances between the nodes of a complex network, which is approximated by a low-rank matrix for fast community detection using a distance-based partitioning algorithm. Calculating such a low-rank approximation can reveal the underlying structure of the data and allow for fast computations.
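As a concrete illustration of this trade-off, the following sketch (assuming NumPy; both function names are ours, not from the chapter) compares the optimal rank-k truncation of the SVD with a simple randomized range-finder approximation, which is not optimal but needs far less computation and memory on large matrices:

```python
import numpy as np

def truncated_svd(A, k):
    """Optimal rank-k approximation (Eckart-Young): truncate the full SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

def randomized_low_rank(A, k, oversample=10):
    """Cheaper, non-optimal rank-k approximation via a randomized range finder.

    Only k + oversample products with A are formed, so the memory footprint
    is much smaller than a full SVD when A is large.
    """
    rng = np.random.default_rng(0)
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)   # orthonormal basis for the sketched range
    B = Q.T @ A                      # small (k + oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

A = np.random.default_rng(1).standard_normal((500, 300))
for approx in (truncated_svd, randomized_low_rank):
    err = np.linalg.norm(A - approx(A, 20), 'fro')
    print(approx.__name__, err)
```

With oversampling, the randomized error is typically close to the optimal one when the spectrum decays quickly, which is exactly the accuracy-for-memory trade-off the excerpt describes.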
Multivariate Analysis
Published in Shyama Prasad Mukherjee, A Guide to Research Methodology, 2019
Computationally, this technique is equivalent to a low-rank approximation of the matrix of observed variables. Factor analysis is related to principal-component analysis (PCA), but the two are not identical: latent-variable models, including factor analysis, use regression-style models to test hypotheses and therefore produce error terms, whereas PCA is generally an exploratory data-analysis tool.
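To make the distinction concrete, here is a minimal sketch (assuming scikit-learn is available; the data generation is illustrative, not from the chapter) fitting both models to the same observations:

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(0)
# Two latent factors generating six observed variables, plus noise ("error terms")
latent = rng.standard_normal((200, 2))
loadings = rng.standard_normal((2, 6))
X = latent @ loadings + 0.3 * rng.standard_normal((200, 6))

# PCA: exploratory low-rank approximation of the observed data matrix
pca = PCA(n_components=2).fit(X)
print("PCA explained variance ratio:", pca.explained_variance_ratio_)

# Factor analysis: a latent-variable model with per-variable error variances
fa = FactorAnalysis(n_components=2).fit(X)
print("FA estimated noise variances:", fa.noise_variance_)
```

The per-variable noise variances estimated by factor analysis are the error terms that PCA, as a pure low-rank approximation, does not model.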
Matrix Completion Methods
Published in Joseph Suresh Paul, Raji Susan Mathew, Regularized Image Reconstruction in Parallel MRI with MATLAB®, 2019
The minimization problem in Equation (8.11) is convex and can be expressed as a semi-definite program (SDP) [5]. SDP solvers use interior-point methods, which cannot handle matrices as large as those encountered in pMRI image reconstruction, because they require the solution of large-scale linear systems to determine the Newton direction. Alternatively, a low-rank approximation obtained by truncating the SVD, retaining the part of the expansion whose singular values exceed a threshold τ, led to the emergence of a general family of algorithms referred to as singular-value thresholding (SVT) methods. These provide a sequence of iterates $\{X_k\}$ converging to a unique low-rank approximation of the original data matrix $M$. In these algorithms, the SVT operator $\mathcal{D}_\tau$ serves as a common tool that applies a soft-thresholding operation at level $\tau$ to the singular values of the input matrix:

$$\mathcal{D}_\tau(\sigma_i) = \begin{cases} 0, & |\sigma_i| < \tau, \\ \operatorname{sgn}(\sigma_i)\,(|\sigma_i| - \tau), & |\sigma_i| \ge \tau. \end{cases}$$
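A minimal sketch of the operator itself, assuming NumPy (the function name svt is ours): the singular values of the input are soft-thresholded at level τ and the matrix is rebuilt:

```python
import numpy as np

def svt(X, tau):
    """Singular-value thresholding D_tau: soft-threshold the singular values of X.

    Singular values are non-negative, so sgn(sigma) = 1 and the rule reduces
    to max(sigma - tau, 0); values below tau are zeroed, lowering the rank.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thresh = np.maximum(s - tau, 0.0)
    return (U * s_thresh) @ Vt

X = np.random.default_rng(0).standard_normal((50, 40))
Y = svt(X, tau=5.0)
print(np.linalg.matrix_rank(Y))  # typically much smaller than min(50, 40)
```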
Solution of the symmetric band partial inverse eigenvalue problem for the damped mass spring system
Published in Inverse Problems in Science and Engineering, 2021
Suman Rakshit, Biswa Nath Datta
Now, low-rank approximations of the matrix $V$ can be obtained from its singular value decomposition (SVD). The SVD of the matrix $V \in \mathbb{R}^{m \times n}$ is $V = P \Sigma Q^{T}$, where $P \in \mathbb{R}^{m \times m}$ and $Q \in \mathbb{R}^{n \times n}$ are orthogonal matrices and $\Sigma \in \mathbb{R}^{m \times n}$ is a diagonal matrix with non-negative real numbers on the diagonal, where $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_p \ge 0$ and $p = \min(m, n)$.
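A short NumPy check (using the P, Σ, Q notation reconstructed above) that the factors behave as stated, and that keeping only the r largest singular values yields a rank-r approximation:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.standard_normal((6, 4))

# Full SVD: V = P @ Sigma @ Q.T, with P and Q orthogonal
P, sigma, Qt = np.linalg.svd(V, full_matrices=True)
Sigma = np.zeros_like(V)
np.fill_diagonal(Sigma, sigma)           # sigma_1 >= ... >= sigma_p >= 0
assert np.allclose(P @ Sigma @ Qt, V)    # the decomposition holds
assert np.allclose(P.T @ P, np.eye(6))   # P is orthogonal

# Rank-r approximation: keep only the r largest singular values
r = 2
V_r = sum(sigma[i] * np.outer(P[:, i], Qt[i, :]) for i in range(r))
print(np.linalg.matrix_rank(V_r))        # -> 2
```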
A content-based recommendation approach based on singular value decomposition
Published in Connection Science, 2022
Francesco Colace, Dajana Conte, Massimo De Santo, Marco Lombardi, Domenico Santaniello, Carmine Valentino
The approximation error is estimated by Equations (5) and (6). Theorem 3.1 allows the computational cost of the SVD to be reduced, with an estimate of the acceptable error based on Equation (3). The low-rank approximation of matrices is used in many applications, such as control theory, signal processing, machine learning, image compression, information retrieval, and quantum physics (Conte, 2020; Conte & Lubich, 2010; Koch & Lubich, 2007; Nonnenmacher & Lubich, 2008).
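Equations (5) and (6) are not reproduced in this excerpt. For reference, error estimates of this kind are typically instances of the standard Eckart-Young bounds for a rank-$k$ truncation $A_k$ of a matrix $A$ with singular values $\sigma_1 \ge \sigma_2 \ge \cdots$ (our notation, not necessarily the paper's):

$$\|A - A_k\|_2 = \sigma_{k+1}, \qquad \|A - A_k\|_F = \Big( \sum_{i > k} \sigma_i^2 \Big)^{1/2}.$$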
Current and future role of data fusion and machine learning in infrastructure health monitoring
Published in Structure and Infrastructure Engineering, 2023
Hao Wang, Giorgio Barone, Alister Smith
The low-rank approximation approach establishes a compact representation of the data by defining an approximate matrix having a lower rank than the original data matrix, while minimising loss of information (Kishore & Schneider, 2017). Algorithms such as singular value decomposition and principal component analysis can be used to solve the low-rank approximation problem. Table 3 summarises the advantages and limitations of different low-rank approximation algorithms.