Linear Algebra
Published in James P. Howard, Computational Methods for Numerical Analysis with R, 2017
Like the LU decomposition, the Cholesky decomposition can be used to solve matrix equations. Further, finding the Cholesky decomposition is notably faster than finding the LU decomposition. However, it is more limited: the Cholesky decomposition can only be used on symmetric positive definite matrices. Symmetric matrices are matrices that are symmetric about the main diagonal; mathematically, $a_{i,j} = a_{j,i}$ for all $i$ and $j$, for a matrix $A$. Positive definite means that each of the pivot entries is positive; equivalently, a positive definite matrix satisfies $\mathbf{x}^{\top} A \mathbf{x} > 0$ for all nonzero vectors $\mathbf{x}$. This has applications in curve fitting and least squares approximation.
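A minimal sketch in R of how the decomposition is used to solve a system (the small matrix `A` and right-hand side `b` below are illustrative, not from the text): since $A = U^{\top}U$ with $U$ upper triangular, solving $A\mathbf{x} = \mathbf{b}$ reduces to two triangular solves.

```r
A <- matrix(c(4, 2, 2, 3), nrow = 2)   # symmetric positive definite
b <- c(1, 2)
U <- chol(A)                           # upper triangular factor: A = t(U) %*% U
y <- forwardsolve(t(U), b)             # forward substitution: t(U) %*% y = b
x <- backsolve(U, y)                   # back substitution:    U %*% x = y
all.equal(drop(A %*% x), b)            # TRUE, up to numerical tolerance
```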
Solving systems of algebraic equations
Published in Victor A. Bloomfield, Using R for Numerical Analysis in Science and Engineering, 2018
The Cholesky decomposition is a special case of the LU decomposition for real, symmetric, positive-definite square matrices. It is invoked from base R or the Matrix package with chol(). chol2inv() in base R computes the inverse of a symmetric positive-definite matrix from its Cholesky decomposition.
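As a brief illustration (the 3 × 3 matrix below is a standard textbook example, not from this source), chol() factors the matrix and chol2inv() recovers its inverse from that factor:

```r
A <- matrix(c(  4,  12, -16,
               12,  37, -43,
              -16, -43,  98), nrow = 3)  # symmetric positive definite
U <- chol(A)                             # upper triangular: A = t(U) %*% U
Ainv <- chol2inv(U)                      # inverse of A from its Cholesky factor
all.equal(Ainv %*% A, diag(3))           # TRUE, up to numerical tolerance
```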
A novel sparse linear mixed model for multi-source mixed-frequency data fusion in telemedicine
Published in IISE Transactions on Healthcare Systems Engineering, 2023
Wesam Alramadeen, Yu Ding, Carlos Costa, Bing Si
Among the few existing efforts in sparse learning of linear mixed models, most studies can select fixed effects only; sparse selection of random effects is more challenging. Indeed, there are significant differences in sparse variable selection between fixed effects and random effects. The fixed effects are characterized by regression coefficients that can easily be selected by imposing an $\ell_1$ penalty. In contrast, the random effects are characterized by a covariance matrix, and it is not straightforward to penalize a covariance matrix using lasso penalties. To address this challenge, the modified Cholesky decomposition was proposed to decompose the covariance matrix $\Psi$ into $\Psi = \Lambda \Gamma \Gamma^{\top} \Lambda$, in which $\Lambda$ is a diagonal matrix and $\Gamma$ is a lower triangular matrix with all the diagonal elements being ones (Bondell et al., 2010; Ibrahim et al., 2011). If one diagonal element in $\Lambda$ is penalized to be zero, the corresponding row and column in the covariance matrix become zeros, eventually leading to the elimination of the corresponding random effect. However, none of these existing studies considered group or structured selection of fixed and random effects brought in by multi-source features.
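A small numerical sketch of this idea, assuming the parameterization $\Psi = \Lambda \Gamma \Gamma^{\top} \Lambda$ above (the covariance matrix `Psi` is illustrative, not from the paper): the factors follow from a standard Cholesky factor by pulling out its diagonal, and zeroing a diagonal element of $\Lambda$ zeroes the matching row and column of $\Psi$.

```r
Psi <- matrix(c(3, 1, 1, 2), nrow = 2)   # illustrative SPD covariance matrix
L <- t(chol(Psi))                        # lower triangular: Psi = L %*% t(L)
Lambda <- diag(diag(L))                  # diagonal part of L
Gamma  <- solve(Lambda) %*% L            # rescale rows so diag(Gamma) is all ones
all.equal(Lambda %*% Gamma %*% t(Gamma) %*% Lambda, Psi)  # TRUE

## Penalizing the 2nd diagonal element of Lambda to zero eliminates the
## 2nd random effect: row and column 2 of the implied covariance vanish.
Lambda0 <- Lambda
Lambda0[2, 2] <- 0
Lambda0 %*% Gamma %*% t(Gamma) %*% Lambda0
```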
An online approach for robust parameter design with incremental Gaussian process
Published in Quality Engineering, 2023
Xiaojian Zhou, Yunlong Gao, Ting Jiang, Zebiao Feng
Additional clarification is required: Equations (4), (5), and (7) show that the inverse of the covariance matrix needs to be calculated for both hyperparameter optimization and regression prediction. Since the covariance matrix of a GP is symmetric positive definite, we can use the Cholesky decomposition to compute this inverse. The Cholesky decomposition is a fast and numerically stable way to solve such systems of equations and is widely used in matrix inversion.
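A minimal sketch of this computation in R (not the authors' code; the squared-exponential covariance and the jitter term are assumptions for the example): the Cholesky factor yields the solve $K^{-1}\mathbf{y}$, the explicit inverse, and the log-determinant that appears in the GP marginal likelihood during hyperparameter optimization.

```r
set.seed(1)
X <- matrix(runif(20), ncol = 2)                  # 10 illustrative design points
K <- exp(-as.matrix(dist(X))^2) + diag(1e-8, 10)  # SPD covariance plus jitter
y <- rnorm(10)

U <- chol(K)                                  # upper triangular: K = t(U) %*% U
alpha  <- backsolve(U, forwardsolve(t(U), y)) # solves K %*% alpha = y
Kinv   <- chol2inv(U)                         # explicit inverse, if required
logdet <- 2 * sum(log(diag(U)))               # log|K| for the marginal likelihood
all.equal(alpha, drop(solve(K) %*% y))        # TRUE, up to numerical tolerance
```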