Role of Dimensionality Reduction Techniques for Plant Disease Prediction
Published in Utku Kose, V. B. Surya Prasath, M. Rubaiyat Hossain Mondal, Prajoy Podder, Subrato Bharati, Artificial Intelligence and Smart Agriculture Technology, 2022
Muhammad Kashif Hanif, Shaeela Ayesha, Ramzan Talib
Kernel principal component analysis (KPCA) is a variant of traditional PCA that transforms high-dimensional data into lower dimensions using non-linear transformation functions (Liu & Lai et al., 2020). In contrast to PCA, which eigendecomposes the covariance matrix, KPCA computes the eigenvectors of the kernel matrix. The kernel function enables KPCA to perform non-linear transformations of high-dimensional data into lower dimensions; polynomial and Gaussian kernels make KPCA suitable for non-linear transformations (Yu et al., 2021; Tummala, 2021). Imaging data reduced using KPCA offers better classification accuracy (Yu et al., 2021). KPCA follows an unsupervised learning approach to reduce the dimensionality of data, and the selection of the kernel parameters affects its performance. Many variants of KPCA have been developed to improve its efficiency for different applications (Binol, 2018).
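A minimal sketch of the unsupervised usage described above, using scikit-learn's `KernelPCA`; the dataset and the `gamma` value are illustrative assumptions, and `gamma` is exactly the kind of kernel parameter the excerpt notes must be selected carefully.

```python
# Sketch: KPCA as unsupervised non-linear dimensionality reduction.
# Dataset, kernel choice (Gaussian/RBF) and gamma are illustrative.
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# Unsupervised: only X is used; labels play no role in the reduction.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=15.0)
X_reduced = kpca.fit_transform(X)
print(X_reduced.shape)  # (300, 2)
```

Changing `gamma` (or switching to a polynomial kernel) changes the embedding substantially, which is why kernel-parameter selection drives KPCA's performance in practice.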
Dimension Reduction Techniques
Published in Rashmi Agrawal, Marcin Paprzycki, Neha Gupta, Big Data, IoT, and Machine Learning, 2020
Muhammad Kashif Hanif, Shaeela Ayesha, Ramzan Talib
Kernel Principal Component Analysis (KPCA) is an extension of classical PCA that performs nonlinear transformations. Instead of computing the covariance matrix, KPCA computes the principal eigenvectors of the kernel matrix. The kernel property makes KPCA suitable for nonlinear mapping (Van Der Maaten et al. 2009; Xie et al. 2016). KPCA typically uses polynomial and Gaussian kernels, which limits its usefulness for manifold learning. Variants of KPCA such as Subset KPCA (SKPCA) were introduced to reduce the computational complexity of KPCA for dimension reduction and classification. Multi-Scale KPCA (MSKPCA) was developed as a fault-diagnostic method for nonlinear process monitoring (Chen et al. 2018). Other applications of KPCA include image classification, sensor data analysis, medicine, and bioinformatics.
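The key step the excerpt names — eigendecomposing the kernel matrix rather than the covariance matrix — can be sketched from scratch as follows; the Gaussian kernel and `gamma` value are illustrative choices, not taken from the chapter.

```python
# From-scratch sketch: KPCA eigendecomposes the (centered) kernel matrix
# instead of the covariance matrix. RBF kernel and gamma are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
gamma = 0.5

# Gaussian kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq_dists)

# Center the kernel matrix in feature space
n = K.shape[0]
one_n = np.full((n, n), 1.0 / n)
Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n

# Principal eigenvectors of the kernel matrix yield the projections
eigvals, eigvecs = np.linalg.eigh(Kc)
idx = np.argsort(eigvals)[::-1][:2]                 # two largest eigenvalues
Z = eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 0.0))
print(Z.shape)  # (100, 2)
```

Because the kernel matrix is centered, the projected coordinates are (numerically) zero-mean, mirroring the centering step of ordinary PCA.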
Process Monitoring
Published in Jose A. Romagnoli, Ahmet Palazoglu, Introduction to Process Control, 2020
Jose A. Romagnoli, Ahmet Palazoglu
Independent component analysis (ICA) [9,10] is used in multivariate signal separation to extract hidden, statistically independent components (ICs) from observed data, and can be adopted for process monitoring tasks similar to PCA [11,12]. In this technique, signal source separation recovers the independent signals after linear mixing. Kernel Principal Component Analysis (KPCA) [13], on the other hand, extends traditional PCA to capture features of nonlinear data spaces. Instead of directly taking the eigenvalue decomposition of the covariance matrix as PCA does, KPCA takes a data set with nonlinear features that PCA fails to preserve and projects it into a higher-dimensional space where those features vary linearly.
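The ICA idea above — recovering statistically independent sources after linear mixing — can be sketched with scikit-learn's `FastICA`; the source signals and mixing matrix are synthetic examples, not from the chapter.

```python
# Sketch of blind source separation with ICA: mix two independent
# signals linearly, then recover them from the observations alone.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                      # sinusoidal source
s2 = np.sign(np.sin(3 * t))             # square-wave source
S = np.c_[s1, s2]

A = np.array([[1.0, 0.5], [0.5, 2.0]])  # illustrative mixing matrix
X = S @ A.T                             # observed (mixed) signals

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)            # estimated independent components

# Up to sign and scale, each estimate matches one true source
corr = np.corrcoef(S.T, S_est.T)[:2, 2:]
print(np.abs(corr).max(axis=1))         # close to 1 for both sources
```

ICA recovers the sources only up to permutation, sign, and scale, which is why the check uses absolute correlations rather than direct equality.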
Nonlinear dynamic process monitoring using deep dynamic principal component analysis
Published in Systems Science & Control Engineering, 2022
Simin Li, Shuanghua Yang, Yi Cao, Zuzhen Ji
PCA extracts several orthogonal principal components from the original multi-dimensional data space, thereby simplifying the analysis model; these principal components contain most of the variation in the original data (Wold et al., 1987). PCA was developed under the assumptions that the system is linear and static; however, most industrial processes are nonlinear and dynamic. Kramer (1991) proposed a nonlinear principal component analysis (NLPCA) method that uses a feedforward neural network to represent the feature mapping, such that the network inputs are reproduced at the output layer. Lee et al. (2004) developed another nonlinear process monitoring method, the so-called kernel principal component analysis (KPCA), which calculates the principal components in a high-dimensional feature space using a kernel function such as the Gaussian or polynomial kernel. To extend PCA to dynamic systems, Ku et al. (1995) applied a ‘time lag shift’ strategy to capture the dynamic information, resulting in the so-called dynamic PCA (DPCA).
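The ‘time lag shift’ strategy behind DPCA can be sketched as follows: each sample is augmented with a number of lagged copies, and ordinary PCA is then run on the augmented matrix. The lag count, the AR(1) process, and the helper name `lag_matrix` are illustrative assumptions, not the authors' implementation.

```python
# Sketch of DPCA's time-lag-shift strategy: augment with lagged samples,
# then apply ordinary (static, linear) PCA to the augmented matrix.
import numpy as np
from sklearn.decomposition import PCA

def lag_matrix(X, lags):
    """Stack X(t), X(t-1), ..., X(t-lags) column-wise."""
    n = X.shape[0]
    cols = [X[lags - k : n - k] for k in range(lags + 1)]
    return np.hstack(cols)

rng = np.random.default_rng(1)
x = np.zeros((500, 2))
e = rng.normal(size=(500, 2))
for t in range(1, 500):                 # simple dynamic (AR(1)) process
    x[t] = 0.8 * x[t - 1] + e[t]

Xd = lag_matrix(x, lags=2)              # (500-2) rows, 2*(2+1) columns
pca = PCA(n_components=3).fit(Xd)
print(Xd.shape)  # (498, 6)
```

Because the augmented columns carry the process's autocorrelation, the leading principal components of `Xd` capture dynamic relations that static PCA on `x` alone would miss.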
Transforming post-mining area into expressway site by stability evaluation with clustering method: A case study
Published in Energy Sources, Part A: Recovery, Utilization, and Environmental Effects, 2021
Song Guo, Guangli Guo, Huaizhan Li, Xiangsheng Yang
The kernel principal component analysis algorithm (KPCA), an unsupervised machine learning technique that improves the conventional principal component analysis algorithm (PCA) with a kernel function, is mainly applied to reduce the high dimensionality of datasets (Hsieh and Tung 2009; Liu and Chen 2015). By selecting the basic features of a dataset with the kernel matrix K, this dimensionality reduction method eliminates the redundancy among the original components through variances and eigenvalue decomposition. After dimensionality reduction, an orthogonal matrix is constructed with fewer indicators to describe the features of the dataset; this effectively reduces the computational complexity while maintaining the ability to describe high-order redundancy (Lakhera, Saxena, and Darji 2011). A typical example comparing the performance of PCA with and without a kernel function on a two-dimensional dataset is shown in Figure 1. Linear mapping effectively reduces the dataset dimension in Figure 1(a). However, when linear mapping is applied to dimension reduction of the dataset with a large amount of overlap in Figure 1(b), it is difficult to achieve satisfactory results, since linear mapping struggles to handle the overlap between the two different dataset distributions when reducing to one dimension (Narukawa and Torra 2005).
On the combination of kernel principal component analysis and neural networks for process indirect control
Published in Mathematical and Computer Modelling of Dynamical Systems, 2020
A. Errachdi, S. Slama, M. Benrejeb
However, PCA is a linear time/space separation method and cannot be directly applied to non-linear systems [30]. Non-linear PCA has been developed using different algorithms. Kernel principal component analysis (KPCA) is a non-linear PCA developed using the kernel method. The kernel method was originally used for the Support Vector Machine (SVM) and was later generalized to many algorithms that can be expressed in terms of dot products, such as PCA. Specifically, KPCA first maps the original inputs into a high-dimensional feature space using the kernel method and then performs PCA in that feature space; linear PCA in the high-dimensional feature space corresponds to a non-linear PCA in the original input space. More recently, another linear transformation method, independent component analysis (ICA), has been developed. Instead of extracting uncorrelated components, ICA attempts to obtain statistically independent components in the transformed vectors. ICA was originally developed for blind source separation and was later generalized for feature extraction [7].
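The correspondence stated above — linear PCA in the feature space equals kernel PCA in the input space — can be verified numerically for a degree-2 polynomial kernel, whose explicit feature map is small enough to write out. The feature map `phi` and the test data are illustrative; the scores match only up to a per-component sign flip, as is usual for eigenvector methods.

```python
# Check: PCA on the explicit feature map phi(x) gives the same scores
# (up to sign) as KernelPCA with the matching polynomial kernel.
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))

def phi(X):
    # Explicit feature map of the kernel k(x, y) = (x.y + 1)^2 in 2-D
    x1, x2 = X[:, 0], X[:, 1]
    return np.c_[np.ones(len(X)), np.sqrt(2) * x1, np.sqrt(2) * x2,
                 x1**2, x2**2, np.sqrt(2) * x1 * x2]

# Dot products in feature space reproduce the kernel exactly
K_explicit = phi(X) @ phi(X).T
K_kernel = (X @ X.T + 1.0) ** 2
print(np.allclose(K_explicit, K_kernel))  # True

# Linear PCA in feature space vs kernel PCA in input space
scores_lin = PCA(n_components=2).fit_transform(phi(X))
scores_k = KernelPCA(n_components=2, kernel="poly", degree=2,
                     gamma=1.0, coef0=1.0).fit_transform(X)
print(np.allclose(np.abs(scores_lin), np.abs(scores_k), atol=1e-5))
```

The practical point of the kernel trick is that KPCA never forms `phi(X)`: it only needs the dot products collected in the kernel matrix, which is what makes high- or infinite-dimensional feature spaces (e.g. the Gaussian kernel's) tractable.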