Basic Computations
Published in Jhareswar Maiti, Multivariate Statistical Modeling in Engineering and Management, 2023
This chapter contains the necessary details of matrix algebra and other computational issues needed to understand multivariate statistical modeling of data. The important concepts are highlighted below. Simple vector and matrix operations help in computing basic descriptive statistics such as means, mean vectors, variances, covariances, and covariance matrices. As orthogonalization of variables is important for many statistical models, vector-based orthogonalization processes are described. Many models, such as PCA and factor analysis, use decomposition of the covariance or correlation matrix; three important matrix decomposition techniques, namely eigenvalue-eigenvector, spectral, and singular value decomposition, are included in this chapter. For parameter estimation, the methods of least squares and maximum likelihood are described. Useful procedures for generating random numbers from univariate normal distributions are presented. Finally, two resampling methods, namely the jackknife and the bootstrap, are described.
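The basic computations the chapter lists (mean vector, covariance matrix, and eigenvalue-eigenvector/spectral decomposition) can be sketched in NumPy; the data below are made up for illustration and are not from the chapter:

```python
import numpy as np

# Toy data: 5 observations of 3 variables (rows = observations).
X = np.array([[2.0, 4.0, 1.0],
              [3.0, 5.0, 2.0],
              [4.0, 4.5, 3.0],
              [5.0, 6.0, 2.5],
              [6.0, 7.0, 4.0]])

mean_vector = X.mean(axis=0)               # per-variable means
Xc = X - mean_vector                       # center the data
cov_matrix = Xc.T @ Xc / (X.shape[0] - 1)  # sample covariance matrix

# Eigenvalue-eigenvector decomposition of the (symmetric) covariance
# matrix, the basis for PCA and for the spectral decomposition.
eigvals, eigvecs = np.linalg.eigh(cov_matrix)

# Spectral decomposition reconstructs the matrix from its eigenpairs.
reconstructed = eigvecs @ np.diag(eigvals) @ eigvecs.T
```

`reconstructed` matches `cov_matrix` up to floating-point error, which is exactly the spectral decomposition Σ = PΛPᵀ mentioned in the abstract.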
Role of Dimensionality Reduction Techniques for Plant Disease Prediction
Published in Utku Kose, V. B. Surya Prasath, M. Rubaiyat Hossain Mondal, Prajoy Podder, Subrato Bharati, Artificial Intelligence and Smart Agriculture Technology, 2022
Muhammad Kashif Hanif, Shaeela Ayesha, Ramzan Talib
Singular value decomposition (SVD) is a linear approach that transforms high-dimensional data into lower dimensions using matrix decomposition and linear transformation functions. SVD can be applied to almost any dataset that can be processed as a matrix (Wang & Zhu, 2017; Wang et al., 2021). The limitation of SVD is that it can reduce the dimensions of data only through linear projections (Zhang et al., 2018). Singular values (SVs) are stable image features, invariant to rotation and ratio (scaling), and are used for disease recognition. However, the SVs of leaf images represent algebraic features, which may limit the ability of ML models to make precise predictions (Wang et al., 2021). SVD can preserve the essential features of an image, which offers good performance for object recognition. Several extensions of SVD have been proposed to improve its efficiency (Zhang & Wang, 2016; Wang et al., 2021).
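A minimal NumPy sketch of this kind of SVD-based dimensionality reduction (the matrix below is random stand-in data, not actual leaf-image features):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a flattened image-feature matrix: 100 samples x 50 features.
X = rng.standard_normal((100, 50))

# Full SVD; singular values in s are returned in decreasing order.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

k = 10                        # number of dimensions to keep
X_reduced = U[:, :k] * s[:k]  # 100 x 10 low-dimensional representation

# Rank-k reconstruction shows how much structure the top-k singular
# values preserve (the "essential features" of the data).
X_approx = (U[:, :k] * s[:k]) @ Vt[:k]
```

The reduction is a linear projection onto the top-k right singular vectors, which is precisely the limitation noted above: nonlinear structure in the data is not captured.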
Linear Systems
Published in Jeffery J. Leader, Numerical Analysis and Scientific Computation, 2022
We have considered two principal techniques for solving Ax = b: the LU decomposition and the QR decomposition. In the next chapter we will discuss iterative methods for this problem. But there is one other major matrix decomposition technique: the singular value decomposition (SVD), A = UΣV^T.
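As a brief illustration, the SVD also solves Ax = b in the least-squares sense via the pseudoinverse, x = VΣ⁻¹Uᵀb (the small overdetermined system below is made up for the sketch):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0],
              [2.0, 4.0]])
b = np.array([9.0, 8.0, 14.0])

# Thin SVD: A = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Least-squares solution via the pseudoinverse: x = V diag(1/s) U^T b
x = Vt.T @ ((U.T @ b) / s)

# Agrees with NumPy's standard least-squares solver.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Unlike LU or QR, the SVD route also handles rank-deficient systems gracefully, by zeroing the reciprocals of (near-)zero singular values.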
Digital twins in human understanding: a deep learning-based method to recognize personality traits
Published in International Journal of Computer Integrated Manufacturing, 2021
Jianshan Sun, Zhiqiang Tian, Yelin Fu, Jie Geng, Chunli Liu
Kosinski, Stillwell, and Graepel (2013) showed that users’ online behavior record can be used to accurately estimate their Big Five personality. The authors employed the singular value decomposition (SVD) model to reduce the dimensions of the user-likes matrix. SVD is a type of matrix decomposition and an important feature dimension reduction method in the field of machine learning. Moreover, it is widely used for recommendation systems and natural-language processing in practice. An M×N-dimensional matrix A is decomposed by truncated SVD as A ≈ U_k Σ_k V_k^T, where Σ_k is a k×k diagonal matrix holding the k largest singular values. The calculated matrices U_k and V_k contain the eigenvectors associated with the largest eigenvalues of AA^T and A^T A, respectively, thereby achieving dimensionality reduction.
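A small NumPy sketch of this truncated-SVD reduction on a made-up binary user-likes matrix (sizes and values are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical binary user-likes matrix: 8 users x 6 items.
A = (rng.random((8, 6)) > 0.5).astype(float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
U_k, S_k, V_k = U[:, :k], np.diag(s[:k]), Vt[:k].T

# Truncated SVD: A ~ U_k S_k V_k^T; user features live in U_k S_k.
user_features = U_k @ S_k        # 8 users x 2 latent dimensions
A_approx = U_k @ S_k @ V_k.T     # rank-k approximation of A

# The columns of U are eigenvectors of A A^T, and the squared singular
# values are the corresponding eigenvalues.
eigvals, _ = np.linalg.eigh(A @ A.T)
```

Each user is thus summarized by k latent coordinates instead of N raw likes, which is the dimensionality reduction used on the user-likes matrix.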
Knowledge Reduction in Formal Contexts through CUR Matrix Decomposition
Published in Cybernetics and Systems, 2019
K. Sumangali, Ch. Aswani Kumar
Among matrix decomposition methods, SVD/eigenvalue-based decomposition methods are very popular in the process of knowledge discovery (Cheung and Vogel 2005; Aswani Kumar 2011; Aswani Kumar and Srinivas 2010; Codocedo, Taramasco, and Astudillo 2011). Mahoney and Drineas (2009) outline the drawbacks of SVD/eigenvalue-based matrix decomposition methods. They observed the following facts, which make SVD/eigenvalue decomposition methods challenging in the knowledge discovery process:
- The data structures in reality do not obey the mathematical operations on the data.
- The orthogonalization process in existing methods destroys the sparsity of the data, converting it into a dense one.
- Non-negativity in data is convex by nature and is not a notion of linear algebra.
- Linear combinations of vectors do not give meaningful interpretations.
- The matrix is decomposed in terms of uncorrelated/orthogonal vectors of decreasing importance.
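By contrast, a CUR decomposition approximates the matrix using actual columns and rows of the data, which keeps sparse, non-negative data interpretable. A naive NumPy sketch (norm-based column/row selection stands in for the leverage-score sampling of Mahoney and Drineas; the matrix is random illustrative data):

```python
import numpy as np

rng = np.random.default_rng(2)
# Sparse non-negative data matrix (e.g., a formal-context incidence
# matrix); values and sizes are illustrative.
A = (rng.random((10, 8)) > 0.7).astype(float)

# Pick actual columns and rows — here simply the ones with the largest
# norms; practical CUR uses leverage-score sampling instead.
col_idx = np.argsort(-np.linalg.norm(A, axis=0))[:4]
row_idx = np.argsort(-np.linalg.norm(A, axis=1))[:4]

C = A[:, col_idx]            # actual (sparse, interpretable) columns
R = A[row_idx, :]            # actual rows
U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # small linking matrix

A_cur = C @ U @ R            # CUR approximation of A
```

Because C and R are copied directly from A, they stay binary and sparse, avoiding the dense orthogonal factors listed as a drawback above.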
A Bayesian Partially Observable Online Change Detection Approach with Thompson Sampling
Published in Technometrics, 2023
Compared with traditional multivariate change detection, two particular challenges for high-dimensional change detection have attracted extensive attention. The first challenge is the complex correlation structure and the curse of dimensionality of high-dimensional data. Existing works commonly use dimension reduction methods, such as dictionary learning and matrix decomposition methods (Mairal et al. 2008; Allen, Grosenick, and Taylor 2014), to describe high-dimensional data at the feature level. The second challenge is how to detect changes with sparse anomalous patterns efficiently. Compared with high-dimensional normal signals, the changed pattern may influence only very few anomalous patterns in the abnormal dictionary, making the detection problem very challenging. Borrowing from the recent literature on decomposition-based anomaly detection schemes, such as sparse principal component analysis (Zhang et al. 2018) and smooth-sparse decomposition (Yan, Paynabar, and Shi 2017, 2018), typical high-dimensional signals can be decomposed into a background dictionary and an anomaly dictionary (i.e., after-change data only), separately. When a change happens, it can be linearly represented by a few anomalous patterns in the dictionary (Mo et al. 2013). Because the dictionary is large, the linear representation is regularized to be sparse. All current works on sequential sparse change detection for high-dimensional data focus on a fully observable process, that is, at each sampling time point, all p variables are observed for analysis. Furthermore, the existing decomposition-based methods are usually solved by regularized likelihood inference, which fails to model the uncertainty of the background and anomaly components.
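A toy NumPy sketch of the decomposition idea: fit a background dictionary by least squares, then recover a sparse anomaly via one soft-thresholding step. The dictionaries, sizes, and the single proximal step are all simplifications of the regularized inference used in the cited work:

```python
import numpy as np

p = 50
# Hypothetical dictionaries: smooth background atoms (constant and
# linear trend) and an identity anomaly dictionary (one atom per
# variable).
B = np.column_stack([np.ones(p), np.linspace(0.0, 1.0, p)])
D = np.eye(p)

# Simulated signal: background plus a sparse change in two variables.
theta_true = np.array([1.0, 2.0])
y = B @ theta_true
y[[10, 11]] += 3.0                 # the sparse anomalous pattern

# Step 1: fit the background component by least squares.
theta, *_ = np.linalg.lstsq(B, y, rcond=None)
residual = y - B @ theta

# Step 2: sparse anomaly coefficients by soft-thresholding the
# residual's correlation with the anomaly dictionary — one proximal
# step of an l1-regularized fit (a full solver would iterate).
lam = 1.0
corr = D.T @ residual
alpha = np.sign(corr) * np.maximum(np.abs(corr) - lam, 0.0)
```

The l1 threshold plays the role of the sparsity regularization described above: only the few truly anomalous coordinates survive it, while the background absorbs the smooth structure.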