Computer-Aided Diagnosis Systems for Prostate Cancer Detection
Published in Prostate Cancer Imaging (Ayman El-Baz, Gyan Pareek, Jasjit S. Suri, Eds.), 2018
Guillaume Lemaître, Robert Martí, Fabrice Meriaudeau
Sparse-PCA is another approach for feature extraction and dimensionality reduction [263]. Like PCA, it projects the data onto linear combinations of the input components; unlike PCA, however, it enforces a sparse representation, so each projection is a linear combination of only a few input components rather than all of them. Referring to Equation 10.55, the cost function of sparse-PCA is formulated to maximize the variance while maintaining a sparsity constraint:

$$\hat{v} = \arg\max_{v} \; v^{\top} \Sigma \, v, \quad \text{subject to } \|v\|_2 = 1, \; \|v\|_0 \le k,$$

where $\Sigma$ is the data covariance matrix, $\|v\|_0$ counts the nonzero loadings of $v$, and $k$ caps how many input components may enter the projection.
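Because the $\|v\|_0 \le k$ constraint makes the problem combinatorial, the most transparent way to see what it demands is exact enumeration over supports, which is feasible only in tiny dimensions. The following is a minimal numpy sketch under that assumption; the function `best_sparse_pc` and the toy data are illustrative, not from the chapter.

```python
import itertools
import numpy as np

def best_sparse_pc(Sigma, k):
    """Exactly solve max v' Sigma v s.t. ||v||_2 = 1, ||v||_0 <= k
    by enumerating all size-k supports (tractable only for small p)."""
    p = Sigma.shape[0]
    best_val, best_v = -np.inf, None
    for support in itertools.combinations(range(p), k):
        sub = Sigma[np.ix_(support, support)]
        vals, vecs = np.linalg.eigh(sub)         # eigenvalues in ascending order
        if vals[-1] > best_val:
            best_val = vals[-1]
            best_v = np.zeros(p)
            best_v[list(support)] = vecs[:, -1]  # embed the unit eigenvector in R^p
    return best_v, best_val

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))
Sigma = np.cov(X, rowvar=False)
v, val = best_sparse_pc(Sigma, k=3)
print(np.nonzero(v)[0], round(val, 3))           # at most 3 active variables
```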
True sparse PCA for reducing the number of essential sensors in virtual metrology
Published in International Journal of Production Research, 2023
Yifan Xie, Tianhui Wang, Young-Seon Jeong, Ali Tosyali, Myong K. Jeong
In recent years, research interest in sparse PCA has increased dramatically. Owing to the NP-hardness of the exact sparse PCA problem, a variety of algorithms have been devised to approximate the optimal solution. To induce sparsity in the loadings, Jolliffe, Trendafilov, and Uddin (2003) employed the LASSO technique to devise a modified PCA called SCoTLASS. To obtain a satisfactory result with sparse PCA, Zou, Hastie, and Tibshirani (2006) developed a regression framework called SPCA, wherein they introduced sparsity by adding an ℓ1 penalty to the regression criterion. By applying a continuous approximation to the cardinality (ℓ0-norm) restriction of standard sparse PCA, Sriperumbudur, Torres, and Lanckriet (2007) proposed a relaxation approach called DC-PCA. In addition, a few greedy search algorithms have been developed, such as GSPCA by Moghaddam, Weiss, and Avidan (2006) and PathSPCA by d'Aspremont, Bach, and El Ghaoui (2008). Moreover, unlike conventional PCA, the majority of sparse PCA algorithms sacrifice the orthogonality of the components. To avoid this tradeoff and produce orthogonal sparse PCs, Benidis et al. (2016) developed orthogonal sparse PCA (OrthSPCA). These algorithms have been used successfully to obtain sparse and interpretable PCs from raw data in diverse scenarios. To boost computational speed, sparse PCA and other sparse variants of PCA can be applied to virtual metrology (VM) processes (Arima et al. 2019).
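For a concrete feel for the ℓ1-penalized family sketched above, scikit-learn ships a SparsePCA estimator that solves an ℓ1-regularized matrix-factorization problem in the spirit of Zou, Hastie, and Tibshirani's SPCA (it is not their exact algorithm); the comparison below on synthetic data shows how the penalty zeroes out loadings that ordinary PCA leaves dense.

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 30))

# Ordinary PCA: loadings are generally dense (all 30 variables contribute).
pca = PCA(n_components=3).fit(X)
print("dense nonzeros per PC:", np.count_nonzero(pca.components_, axis=1))

# Sparse PCA: the l1 penalty (alpha) drives many loadings exactly to zero,
# trading a little explained variance for interpretability.
spca = SparsePCA(n_components=3, alpha=2.0, random_state=0).fit(X)
print("sparse nonzeros per PC:", np.count_nonzero(spca.components_, axis=1))
```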
Hierarchical sparse functional principal component analysis for multistage multivariate profile data
Published in IISE Transactions, 2021
To address high-dimensional settings and enhance model interpretability simultaneously, Sparse PCA (SPCA) has been proposed as an intuitively appealing solution in which only significant entries are kept in an eigenvector. Instead of manually thresholding small entries of the eigenvectors, which may yield misleading results (Cadima and Jolliffe, 1995), Zou et al. (2006), Shen and Huang (2008), and Witten et al. (2009) performed PCA by minimizing reconstruction errors and imposed sparsity on the eigenvectors through the L1 (LASSO) penalty. SPCA has also been developed in terms of the L0 penalty (d’Aspremont et al., 2008) and a thresholding orthogonal iteration procedure (Ma, 2013). Allen (2013) and Chen and Lei (2015) exploited sparsity in functional PCA (FPCA) for univariate profiles, but for multivariate profiles there are few related works. One exception is the recent work of Zhang et al. (2018b), which combined the SPCA of Zou et al. (2006) with the multi-channel FPCA of Paynabar et al. (2016) and represented each profile with a selected set of orthonormal basis functions. In fact, they imposed sparsity on the PC scores rather than on the eigenvectors. This formulation works well for their profile-monitoring purpose, but it is not applicable to our variance-decomposition problem, where we expect sparsity in the eigenvectors so that the significant stages and process variables in each eigenvector can be clearly identified, improving the interpretation of the extracted variance patterns.
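The score-versus-eigenvector distinction drawn above can be made concrete with a toy contrast; the snippet below (our illustration, with arbitrary soft-thresholding levels, not the estimators used in the cited papers) shows that zeros in an eigenvector point at variables while zeros in the scores point at samples.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((40, 12))          # 40 profiles, 12 process variables
U, s, Vt = np.linalg.svd(X, full_matrices=False)

soft = lambda a, t: np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

# Sparsity on the eigenvector (loading): zeros identify which of the
# 12 variables drive the first component -- the interpretation needed
# for variance decomposition.
v_sparse = soft(Vt[0], 0.15)
print("active variables:", np.nonzero(v_sparse)[0])

# Sparsity on the PC scores instead: zeros identify which of the 40
# profiles load on the component -- useful for monitoring, but it says
# nothing about which variables matter.
u_sparse = soft(U[:, 0], 0.1)
print("active profiles:", np.nonzero(u_sparse)[0])
```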
Sparse Principal Component Analysis Based on Least Trimmed Squares
Published in Technometrics, 2020
The first sparse PCA approach is the Simplified Component Technique-LASSO (SCoTLASS) of Jolliffe, Trendafilov, and Uddin (2003), which includes a lasso penalty to obtain sparse loadings. The sparse PCA method of Zou, Hastie, and Tibshirani (2006) uses an elastic net penalty. However, both methods face computational challenges in high dimensions, and faster sparse PCA methods have therefore been proposed. The most popular ones are variations of the “power method.” These include sparse PCA via regularized SVD (sPCA-rSVD) (Shen and Huang 2008; Shen, Shen, and Marron 2013), penalized matrix decomposition (Witten, Tibshirani, and Hastie 2009), the generalized power method (Journée et al. 2010), and the truncated power (TPower) method (Yuan and Zhang 2013). Other methods for estimating a sparse PC subspace can be found in Kuleshov (2013), Ma (2013), Cai, Ma, and Wu (2013), and Lei and Vu (2015).
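As an illustration of the power-method family, here is a minimal numpy sketch in the style of the truncated power (TPower) iteration of Yuan and Zhang (2013); the initialization, iteration count, and stopping rule are our simplifications rather than the authors' reference implementation.

```python
import numpy as np

def truncated_power(Sigma, k, n_iter=200, seed=0):
    """Approximate argmax v' Sigma v s.t. ||v||_2 = 1, ||v||_0 <= k by
    alternating a power step with hard truncation to the k largest-
    magnitude entries (TPower-style iteration)."""
    rng = np.random.default_rng(seed)
    p = Sigma.shape[0]
    v = rng.standard_normal(p)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = Sigma @ v                        # power step
        keep = np.argsort(np.abs(w))[-k:]    # indices of the k largest entries
        truncated = np.zeros(p)
        truncated[keep] = w[keep]            # hard-threshold the rest to zero
        nrm = np.linalg.norm(truncated)
        if nrm == 0.0:
            break
        v_new = truncated / nrm
        if np.linalg.norm(v_new - v) < 1e-10:
            return v_new                     # converged
        v = v_new
    return v

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 50))
Sigma = np.cov(X, rowvar=False)
v = truncated_power(Sigma, k=5)
print(np.count_nonzero(v), float(v @ Sigma @ v))  # 5 active loadings
```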