Implementation of Data-Driven Approaches for Condition Assessment of Structures and Analyzing Complex Data
Published in M.Z. Naser, Leveraging Artificial Intelligence in Engineering, Management, and Safety of Infrastructure, 2023
Vafa Soltangharaei, Li Ai, Paul Ziehl
Principal component analysis (PCA) is a method to reduce the dimensionality of a data set. Many features can be extracted from AE signals, such as duration, counts, amplitude, peak frequency, energy, etc. However, working with all the features and finding the relations between them is difficult. PCA reduces the dimensionality of a data set by projecting the data onto new coordinates. The input for PCA is a matrix whose columns are features (variables) and whose rows are observations (hits). PCA first computes the covariance matrix of the input. Then, eigenvalue analysis is conducted on the covariance matrix, yielding eigenvalues and eigenvectors. The number of eigenvalues and eigenvectors equals the number of features in the input matrix, and each eigenvector has as many components as there are features. The eigenvalues and their corresponding eigenvectors are sorted from largest to smallest eigenvalue. The original input matrix is then transformed into the new space by multiplying it by the matrix containing the eigenvectors. Based on the eigenvalues, the least important principal components can be discarded without losing a significant amount of information.
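The steps described above can be sketched as follows. This is a minimal illustration assuming NumPy is available; the feature matrix `X` is a hypothetical stand-in for an AE feature table (rows = hits, columns = features), not data from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical AE feature matrix: 200 hits (rows) x 6 features (columns)
X = rng.normal(size=(200, 6))

# 1. centre the data and compute the covariance matrix of the features
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)            # 6 x 6 covariance matrix

# 2. eigenvalue analysis of the covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)    # eigh: covariance is symmetric

# 3. sort eigenvalues and eigenvectors from largest to smallest
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# 4. project onto the new axes and keep only the first k components
k = 2
scores = Xc @ eigvecs[:, :k]              # 200 x 2 reduced representation

# fraction of the total variance retained by the first k components
retained = eigvals[:k].sum() / eigvals.sum()
```

The eigenvalues measure the variance carried by each new axis, which is why the trailing components can be dropped with little information loss.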
Machine Learning for Compositional Data Analysis in Support of the Decision Making Process
Published in Kim Phuc Tran, Machine Learning and Probabilistic Graphical Models for Decision Support Systems, 2023
Thi Thuy Van Nguyen, Cédric Heuchenne, Kim Phuc Tran
PCA is a well-known method for reducing the dimension of data while preserving as much of the variance of the dataset as possible. PCA was first introduced by Karl Pearson [41] in 1901, independently studied and developed later by Hotelling [38] in 1933, and has become one of the most valuable results of applied linear algebra. Due to its simplicity and robust ability to extract relevant information from confusing data, PCA is widely used in many domains such as neuroscience, quantitative finance, facial recognition, image compression, etc.; see, for example, [50], [48], [49], [36] among many others for more details.
Machine Learning
Published in Seyedeh Leili Mirtaheri, Reza Shahbazian, Machine Learning Theory to Applications, 2022
Seyedeh Leili Mirtaheri, Reza Shahbazian
Principal Component Analysis (PCA) is a dimensionality reduction method that is often used to reduce the dimensionality of large datasets by transforming a large set of variables into a smaller one that still contains most of the information from the large set. Reducing the number of variables of a data set naturally comes at the expense of accuracy, but the trick in dimensionality reduction is to trade a little accuracy for simplicity. Because smaller datasets are easier to explore and visualize, they make analyzing data much easier and faster for machine learning algorithms, with no extraneous variables to process. In a nutshell, the idea of the principal component analysis method is to reduce the number of variables of a dataset while preserving as much information as possible.
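The accuracy-for-simplicity trade-off can be made concrete by keeping only as many components as are needed to retain a target fraction of the variance. A minimal sketch, assuming NumPy and synthetic data (ten correlated variables generated from three latent factors; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical dataset: 100 observations of 10 correlated variables,
# driven by only 3 underlying factors plus a little noise
base = rng.normal(size=(100, 3))
X = base @ rng.normal(size=(3, 10)) + 0.05 * rng.normal(size=(100, 10))

# SVD of the centred data gives the principal axes (rows of Vt)
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_ratio = s**2 / np.sum(s**2)           # variance explained per component

# smallest number of components explaining at least 95% of the variance
k = int(np.searchsorted(np.cumsum(var_ratio), 0.95)) + 1

# reduced dataset: same observations, far fewer variables
Z = Xc @ Vt[:k].T
```

Because the ten variables here are driven by three latent factors, `k` comes out much smaller than ten while still preserving 95% of the variance.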
Assessing stope performance using georeferenced octrees and multivariate analysis
Published in Mining Technology, 2023
Benoît McFadyen, Martin Grenon, Kyle Woodward, Yves Potvin
Multivariate statistical analysis considers the 13 parameters previously presented and allows for the identification of complex relations between the parameters. The two multivariate methods used are PCA and PLS, presented in Section ‘Identification of critical parameters through statistical analysis’. Since the parameters used have different units, they first need to be normalised in order to carry the same weight in the analysis. In this case, the distribution of each parameter is transformed to have a mean of 0 and a standard deviation of 1. In the PCA, the variance explained diminishes with each successive component, so later components contain less and less information pertinent to the global understanding of the relationships between parameters; these components are therefore excluded from the analysis. For the PLS analysis, the covariance between the parameters and stope performance is analysed, and because there is only one stope performance parameter, there is only one component to analyse.
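The normalisation step described here is a standard z-score transformation. A minimal sketch with NumPy; the two columns are hypothetical stope parameters on very different scales, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
# hypothetical parameters with very different units and magnitudes
X = np.column_stack([
    rng.normal(50.0, 10.0, 500),       # e.g. a length in metres
    rng.normal(0.002, 0.0005, 500),    # e.g. a dimensionless ratio
])

# z-score standardisation: each parameter gets mean 0, std deviation 1,
# so no parameter dominates the PCA/PLS purely because of its units
Z = (X - X.mean(axis=0)) / X.std(axis=0)
```

After this transformation, the covariance matrix of `Z` equals the correlation matrix of `X`, which is what gives every parameter equal weight.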
Seismic risk assessment for the North Eastern Region of India by integrating seismic hazard and social vulnerability
Published in Sustainable and Resilient Infrastructure, 2023
Navdeep Agrawal, Laxmi Gupta, Jagabandhu Dixit, Sujit Kumar Dash
Principal component analysis (PCA) is a prevalent factor analysis and data reduction technique that reduces a large set of variables to a smaller set of orthogonal components (principal components) explaining most of the variance in the applied dataset (Jolliffe, 2002). In factor analysis, the sampling adequacy of the applied dataset is checked using the KMO (Kaiser-Meyer-Olkin) measure and Bartlett’s test. The KMO statistic ranges from 0 to 1, and values close to 1 suggest a dataset adequate for factor analysis or PCA. For PCA, the KMO value should be at least 0.6; for PCA results to be reliable, the value should be greater than 0.8 (Chakraborty et al., 2020). Bartlett’s sphericity test was also employed to confirm the factor selection; it tests the null hypothesis that the correlation matrix is an identity matrix. The output is the p-value of a chi-squared statistic. A small p-value (<0.05) suggests that the pairwise correlation matrix is not an identity matrix, and the dataset is suitable for PCA (Sharma, 1996). In the present study, a KMO value of 0.897 (>0.8) and a p-value of 0.000 (<0.05) in Bartlett’s test were obtained, which indicates sufficient data adequacy for statistical analysis.
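Bartlett’s sphericity test is simple enough to sketch directly from its textbook formula, assuming NumPy and SciPy are available (the data below are synthetic and illustrative, not the study’s dataset; packages such as factor_analyzer also provide ready-made KMO and Bartlett implementations):

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(X):
    """Test the null hypothesis that the correlation matrix is an identity.

    Returns the chi-squared statistic and its p-value; a small p-value
    (< 0.05) suggests the variables are correlated enough for PCA.
    """
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    # textbook statistic: -(n - 1 - (2p + 5)/6) * ln(det(R))
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    dof = p * (p - 1) / 2
    return stat, chi2.sf(stat, dof)

rng = np.random.default_rng(3)
# strongly correlated synthetic data: the null should be rejected
base = rng.normal(size=(300, 2))
X = base @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(300, 5))
stat, p_value = bartlett_sphericity(X)
# with correlated data like this, p_value is effectively zero
```

For uncorrelated data the determinant of the correlation matrix is close to 1, the statistic is small, and the p-value is large, signalling that PCA would not be worthwhile.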
Performance Analysis of Artificial Neural Network for Hand Movement Detection from EMG Signals
Published in IETE Journal of Research, 2022
Angana Saikia, Sushmi Mazumdar, Nitin Sahai, Sudip Paul, Dinesh Bhatia
Principal component analysis (PCA) is one of the common appearance-based classical linear methods in the field of pattern recognition [29]. The main application of PCA is to reduce the dimensionality of a data set in which there are a large number of interrelated variables. PCA is capable of representing a dataset in a way that highlights its similarities and differences [30]. It is a dimensionality reduction technique used for compression and movement recognition problems [31]. It is also known as eigenspace projection or the Karhunen–Loève transformation. PCA calculates the eigenvectors of the covariance matrix and projects the original data onto the lower-dimensional feature space defined by those eigenvectors; in face recognition, the resulting basis vectors are referred to as eigenfaces [29].