Machine Learning for Disease Classification: A Perspective
Published in Kayvan Najarian, Delaram Kahrobaei, Enrique Domínguez, Reza Soroushmehr, Artificial Intelligence in Healthcare and Medicine, 2022
Jonathan Parkinson, Jonalyn H. DeCastro, Brett Goldsmith, Kiana Aran
The insight in the previous example is that relationships that are nonlinear in low-dimensional spaces may be approximated by a linear model in a higher-dimensional space. Finding the appropriate basis expansion, however, may be nontrivial and may incur an increased risk of overfitting. Computing similarity between datapoints after basis expansion may also be computationally expensive. Kernel methods use a kernel function to implicitly map the data into a higher-dimensional (or even infinite-dimensional) space, achieving the same end effect as an explicit mapping without the computational cost, an approach sometimes called the “kernel trick”. The kernel function assesses the similarity between any two datapoints. A variety of kernel functions, including the squared exponential kernel, the Matérn kernel, and the spectral mixture kernel, have been described in the literature (Genton, 2001; Wilson & Adams, 2013).
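To make the equivalence concrete, the following minimal sketch (illustrative NumPy code, not taken from the chapter) compares an explicit degree-2 basis expansion with the corresponding polynomial kernel: the kernel returns the same inner product without ever constructing the expanded features.

```python
import numpy as np

def phi(v):
    """Explicit degree-2 basis expansion of a 2-D point: (x1^2, x2^2, sqrt(2)*x1*x2)."""
    x1, x2 = v
    return np.array([x1 ** 2, x2 ** 2, np.sqrt(2) * x1 * x2])

def poly_kernel(x, y):
    """Degree-2 polynomial kernel: the same inner product, computed implicitly."""
    return np.dot(x, y) ** 2

x, y = np.array([1.0, 2.0]), np.array([3.0, 0.5])
print(np.dot(phi(x), phi(y)))  # 16.0 -- similarity after explicit mapping
print(poly_kernel(x, y))       # 16.0 -- identical, via the kernel trick
```

For a kernel such as the squared exponential, the corresponding feature space is infinite-dimensional, so the implicit route is the only practical one.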
What Is Industry 5.0?
Published in Pau Loke Show, Kit Wayne Chew, Tau Chuan Ling, The Prospect of Industry 5.0 in Biomanufacturing, 2021
Omar Ashraf ElFar, Angela Paul A/p Peter, Kit Wayne Chew, Pau Loke Show
Deep learning (DL) could also be used in such architectures. Kernel methods are a class of algorithms for pattern analysis, whose best-known member is the support vector machine (SVM); any linear model can be turned into a non-linear model by applying the kernel trick (Zoppis 2019). According to Weston et al. (2012), “We show how nonlinear semi-supervised embedding algorithms popular for use with ‘shallow’ learning techniques such as kernel methods can be easily applied to deep multi-layer architectures, either as a regularizer at the output layer, or on each layer of the architecture. Compared to standard supervised backpropagation this can give significant gains. This trick provides a simple alternative to existing approaches to semi-supervised deep learning whilst yielding competitive error rates compared to those methods, and existing shallow semi-supervised techniques”.
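As a concrete illustration of turning a linear model non-linear via the kernel trick, here is a minimal scikit-learn sketch (the dataset and parameters are illustrative, not from the chapter): a linear SVM is at chance on concentric circles, while the same model with an RBF kernel separates them.

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings: not linearly separable in the original 2-D space
X, y = make_circles(n_samples=400, noise=0.05, factor=0.4, random_state=0)

linear = SVC(kernel="linear").fit(X, y)
rbf = SVC(kernel="rbf", gamma=2.0).fit(X, y)

print("linear kernel accuracy:", linear.score(X, y))  # roughly 0.5, chance level
print("RBF kernel accuracy:", rbf.score(X, y))        # close to 1.0
```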
Anomaly Detection in a Fleet of Systems
Published in Ashok N. Srivastava, Jiawei Han, Machine Learning and Knowledge Discovery for Engineering Systems Health Management, 2016
In this section, we describe kernel-based methods for anomaly detection in more detail than we devoted to other algorithms because of recent results demonstrating the strong promise of kernel methods for anomaly detection in heterogeneous data sets. In general, a kernel function, which is at the heart of any kernel method, can be informally thought of as a measure of the similarity between two data points. The use of certain kernel functions turns out to be equivalent to mapping n-dimensional input data into a high-dimensional (possibly infinite-dimensional) feature space and then using linear methods within that feature space. This is possible because vectors in the high-dimensional feature space appear only inside dot products, which are always scalars. Tasks like anomaly detection, classification, clustering, or regression are then performed in this high-dimensional feature space. Mapping the data into a high-dimensional space and using linear methods there, rather than using nonlinear methods in the original input space, combines the benefits of linear methods (well-established theory, a guarantee of reaching the optimal solution for a fixed training set) with the representational power of a nonlinear method. Support vector–based classification, regression, and anomaly detection are classical examples of kernel-based methods. Some popular kernel-based techniques are shown in shaded (solid color) blocks in Figure 3.1.
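As a concrete instance of support vector–based anomaly detection, the sketch below (illustrative parameters, not from the chapter) fits a one-class SVM with an RBF kernel to nominal data; the kernel implicitly maps the points into a high-dimensional feature space, where a linear boundary encloses the nominal region.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
nominal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))  # normal operating data
test = np.array([[0.1, -0.2],    # a typical point
                 [4.0, 4.0]])    # a far-off point

# nu upper-bounds the fraction of training points treated as outliers
detector = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(nominal)
print(detector.predict(test))  # [ 1 -1]: +1 = nominal, -1 = anomaly
```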
Analyse vehicle–pedestrian crash severity at intersection with data mining techniques
Published in International Journal of Crashworthiness, 2022
The support vector machine (SVM) algorithm is a type of kernel method commonly applied for supervised learning [51]. SVM models have been shown to produce results comparable to or better than other statistical and machine learning methods [35,52]. In this paper, as feature analysis is required to identify and explain the most significant factors contributing to vehicle–pedestrian crashes at intersections, an SVM with a linear kernel is applied. The linear discriminant function is shown in Equation (3): f(x) = wᵀx + b, where w denotes the weight vector and b denotes the bias. Non-linear kernels are not considered in this paper because they transform the original data and map it to another feature space, which would obscure the interpretation of the individual crash-contributing factors.
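A minimal sketch of this setup (synthetic data and labels, purely illustrative, not from the paper): with a linear kernel the fitted model exposes the w and b of Equation (3) directly, so the weight attached to each candidate factor can be read off and ranked for feature analysis.

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in for crash records: rows are crashes, columns are candidate factors
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (2.0 * X[:, 0] - 0.5 * X[:, 2] > 0).astype(int)  # synthetic severity label

model = SVC(kernel="linear").fit(X, y)
w, b = model.coef_[0], model.intercept_[0]  # the w and b of Equation (3)
print("w =", w, "b =", b)
print("factors ranked by |weight|:", np.argsort(-np.abs(w)))
```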
Kernel Recursive Least Square Approach for Power System Harmonic Estimation
Published in Electric Power Components and Systems, 2021
Omar Avalos, Erik Cuevas, Héctor G. Becerra, Jorge Gálvez, Salvador Hinojosa, Daniel Zaldívar
Kernel-based methods [20–22] have been applied to reformulate learning problems from a non-linear perspective for several supervised and unsupervised methodologies [20, 23]. Many classification or regression problems are not suitable for traditional linear models due to the spatial relationships among the input data. For such problems, kernel methods increase the dimensionality of the data, allowing complex problems to be solved in high-dimensional spaces using linear methods. The idea behind kernel methods is that kernel evaluations on pairs of input vectors can be interpreted as inner products in a high-dimensional feature space (i.e., a Hilbert space). This interpretation brings linearity, convexity, and universal approximation [18], simplifying the learning of input data that describe complex spatial relationships in a low-dimensional space. However, increasing the dimensionality may result in overfitting, the phenomenon in which a good fit to the training data is achieved but prediction on test data performs poorly. Under such conditions, several dimensionality reduction techniques have been proposed that retain only a certain number of distinctive feature vectors of the input data to overcome overfitting issues.
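The following naive sketch shows the idea behind kernel recursive least squares (illustrative code; a real KRLS implementation updates the inverse kernel matrix recursively and sparsifies the dictionary rather than re-solving the full system at every step):

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Squared exponential kernel: similarity between two samples."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

class NaiveKernelRLS:
    """Online kernel least squares: every sample joins the dictionary and the
    regularized kernel system is re-solved at each step (O(n^3) per update)."""
    def __init__(self, kernel=rbf, lam=0.1):
        self.kernel, self.lam = kernel, lam
        self.X, self.y, self.alpha = [], [], None

    def update(self, x, y):
        self.X.append(np.asarray(x, dtype=float))
        self.y.append(float(y))
        n = len(self.X)
        K = np.array([[self.kernel(a, b) for b in self.X] for a in self.X])
        # Regularized linear least squares in feature space, via the kernel trick
        self.alpha = np.linalg.solve(K + self.lam * np.eye(n), np.array(self.y))

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        return sum(a * self.kernel(xi, x) for a, xi in zip(self.alpha, self.X))

# Track a noisy sinusoid online (a stand-in for a harmonic signal)
rng = np.random.default_rng(0)
model = NaiveKernelRLS()
for t in np.linspace(0.0, 2.0 * np.pi, 50):
    model.update([t], np.sin(3.0 * t) + 0.05 * rng.standard_normal())
print(model.predict([np.pi / 6]))  # approximately sin(pi/2) = 1
```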
Change detection using least squares one-class classification control chart
Published in Quality Technology & Quantitative Management, 2020
Kernel methods are very flexible and take various forms to handle different problems. Recent proposals have suggested new control charts based on one-class classification algorithms. Sun and Tsung (2003) proposed kernel-distance-based charts (K charts) based on the Support Vector Data Description (SVDD) algorithm. Maboudou-Tchao, Silva, and Diawara (2016) suggested an SVDD control chart using the Mahalanobis kernel. Kumar, Choudhary, Kumar, Shankar, and Tiwari (2006) used another one-class SVM (OCSVM) technique to construct robust K charts through normalized monitoring statistics. Maboudou-Tchao (2017) proposed a control chart based on Support Matrix Data Description (SMDD) to monitor the covariance matrix, and showed that it can monitor the covariance matrix of a process when the rational subgroups have fewer observations than variables, and hence the covariance matrix of high-dimensional data.
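A minimal sketch of the K-chart idea (illustrative; it uses a one-class SVM with an RBF kernel as a stand-in for SVDD, to which it is closely related for this kernel, and none of the parameters come from the cited papers): fit the model to in-control Phase I data, take its kernel-distance score as the monitoring statistic, and set the control limit as an empirical quantile of the Phase I scores.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
phase1 = rng.normal(size=(300, 2))                        # in-control reference data
phase2 = np.vstack([rng.normal(size=(5, 2)),              # new points: in control...
                    rng.normal(loc=3.0, size=(5, 2))])    # ...then a mean shift

model = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.01).fit(phase1)

# Monitoring statistic: negated decision function (larger = farther from the center)
stat1 = -model.decision_function(phase1)
limit = np.quantile(stat1, 0.99)  # empirical 99% control limit from Phase I

for i, s in enumerate(-model.decision_function(phase2)):
    print(f"point {i}: stat = {s:.3f}", "out of control" if s > limit else "in control")
```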