Machine Learning for Disease Classification: A Perspective
Published in Kayvan Najarian, Delaram Kahrobaei, Enrique Domínguez, Reza Soroushmehr, Artificial Intelligence in Healthcare and Medicine, 2022
Jonathan Parkinson, Jonalyn H. DeCastro, Brett Goldsmith, Kiana Aran
The insight in the previous example is that relationships that are nonlinear in low-dimensional spaces may be approximated by a linear model in a higher-dimensional space. Finding the appropriate basis expansion, however, may be nontrivial and may incur an increased risk of overfitting. Computing similarity between datapoints after the basis expansion may also be computationally expensive. Kernel methods use a kernel function to implicitly map the data into a higher-dimensional (or even infinite-dimensional) space, achieving the same end effect as an explicit mapping without the computational cost, an approach sometimes called the “kernel trick”. The kernel function assesses the similarity between any two datapoints. A variety of kernel functions, including the squared exponential kernel, the Matérn kernel, and the spectral mixture kernel, have been described in the literature (Genton, 2001; Wilson & Adams, 2013).
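To make the kernel trick concrete, here is a minimal sketch (our illustration, not code from the chapter) comparing an explicit degree-2 basis expansion with the equivalent implicit kernel evaluation: the kernel $k(x, z) = (x^\top z)^2$ reproduces the inner product of the expanded features without ever constructing them.

```python
import numpy as np

def phi(x):
    # Explicit degree-2 basis expansion of a 2-D input:
    # (x1^2, x2^2, sqrt(2)*x1*x2). The expansion grows rapidly
    # with the input dimension, which is what the kernel avoids.
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

def poly2_kernel(x, z):
    # Implicit mapping: the same inner product, no expansion built.
    return np.dot(x, z) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])

print(np.dot(phi(x), phi(z)))  # explicit feature space: 16.0
print(poly2_kernel(x, z))      # kernel trick:           16.0
```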
Discriminative and Generative Model Learning for Video Object Tracking
Published in Rashmi Agrawal, Marcin Paprzycki, Neha Gupta, Big Data, IoT, and Machine Learning, 2020
Vijay K. Sharma, K. K. Mahapatra, Bibhudendra Acharya
We propose a parameter learning technique based on kernel similarity. The inner product between two vectors can be computed in a high-dimensional space with the help of the kernel trick: there is no need to explicitly map each pattern to the high-dimensional space, which avoids the computational burden. In general, a kernel function takes two data patterns and computes the similarity (a dot product or a distance) between them in the high-dimensional space, also called the feature space, using an implicit feature mapping. Any function that satisfies Mercer’s condition can be used as a kernel function. The Gaussian kernel is one such function; it maps the data into an infinite-dimensional space to compute the similarity. The proposed method is given in Algorithm 1. In this method, the parameter vector is updated using the positive and negative examples near the boundary, $V_{t,k}$. For each vector in the set $V_{t,k}$, the parameter is modified iteratively until the kernel similarities of the positive and negative examples closest to the boundary in the current frame simultaneously satisfy the required high and low similarity values relative to the previous frame. The updated parameter is the one that has a high score as well as a high margin.
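As an illustration of the Gaussian kernel similarity that drives this update, here is a minimal sketch (our addition; the vectors and the thresholds tau_high/tau_low are hypothetical placeholders, since Algorithm 1 itself is not reproduced in this excerpt):

```python
import numpy as np

def gaussian_kernel(u, v, sigma=0.5):
    # Similarity in an infinite-dimensional feature space, computed
    # implicitly from the squared distance in the input space.
    return np.exp(-np.sum((u - v) ** 2) / (2.0 * sigma ** 2))

# Hypothetical parameter vector from the previous frame and boundary
# examples from the current frame.
w_prev = np.array([0.4, 0.1, 0.9])
pos_example = np.array([0.5, 0.2, 1.0])  # positive example near the boundary
neg_example = np.array([0.9, 0.8, 0.1])  # negative example near the boundary

# One plausible acceptance test: the positive example must stay highly
# similar and the negative example dissimilar (thresholds hypothetical).
tau_high, tau_low = 0.8, 0.3
accept = (gaussian_kernel(w_prev, pos_example) >= tau_high
          and gaussian_kernel(w_prev, neg_example) <= tau_low)
print(accept)  # True for these toy values
```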
Machine learning in designing Theo Jansen linkage
Published in Artde D.K.T. Lam, Stephen D. Prior, Siu-Tsen Shen, Sheng-Joue Young, Liang-Wen Ji, Engineering Innovation and Design, 2019
Min-Chan Hwang, Longji Xu, Chiou-Jye Huang, Feifei Liu
The selection of a kernel function is heavily dependent on the specifics of the data. In the absence of prior knowledge, the Gaussian kernels, also called radial basis functions (RBFs), are the ones most commonly applied. The solution to the primal problem in (12) can then be found efficiently by solving the following dual problem:

$$\max_{\lambda}\ \sum_{j}\lambda_{j} - \frac{1}{2}\sum_{j}\sum_{k}\lambda_{j}\lambda_{k}\,y_{j}y_{k}\,K(x_{j},x_{k})$$
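The following sketch (our illustration, not code from the chapter) solves this dual numerically for a toy dataset with a Gaussian/RBF kernel; the box constraint $0 \le \lambda_j \le C$ and the equality constraint $\sum_j \lambda_j y_j = 0$ are the standard soft-margin conditions, which the excerpt does not restate.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X, gamma=1.0):
    # Pairwise squared distances -> RBF Gram matrix
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

# Toy XOR-like problem, which needs a nonlinear kernel
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])

Q = (y[:, None] * y[None, :]) * rbf_kernel(X)

def neg_dual(lam):
    # Negated dual objective: minimize -(sum_j lam_j - 0.5 lam^T Q lam)
    return 0.5 * lam @ Q @ lam - lam.sum()

C = 10.0  # soft-margin bound (assumed; not shown in the excerpt)
res = minimize(neg_dual, x0=np.zeros(len(y)), method="SLSQP",
               bounds=[(0.0, C)] * len(y),
               constraints={"type": "eq", "fun": lambda lam: lam @ y})
print(res.x)  # optimal multipliers lambda_j
```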
A support vector machine model for due date assignment in manufacturing operations
Published in Journal of Industrial and Production Engineering, 2023
Doraid Dalalah, Udechukwu Ojiako, Khaled A. Alkhaledi, Alasdair Marshall
This is referred to as the kernel trick. The transformation function φ(·) is represented by the eigenfunctions of the kernel K(x_i, x_j) (a concept from functional analysis). In general, the exact transformation does not attract attention; instead, it is the kernel function that is usually specified. The kernel function is simply an inner product that serves as a similarity measure between the data objects. It is important to emphasize that the selected kernel has to satisfy Mercer’s condition, which implies that the kernel (Gram) matrix whose (i, j) entry is K(x_i, x_j) is always positive semi-definite. Many kernel functions exist. The simplest is the polynomial kernel of degree d, given by $K(x_i, x_j) = (x_i \cdot x_j + 1)^d$; when d = 1, this reduces to a straight-line separator. The radial basis kernel is given by $K(x_i, x_j) = \exp\left(-\|x_i - x_j\|^2 / (2\sigma^2)\right)$, where σ is a kernel parameter. Power kernels are given by $K(x_i, x_j) = -\|x_i - x_j\|^{\beta}$, with $0 < \beta \le 2$.
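As an illustrative check of this Mercer property (a sketch we add here, on a hypothetical toy dataset), one can build the Gram matrix for the polynomial and radial basis kernels above and confirm that its eigenvalues are non-negative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))  # toy data: 6 objects with 3 features

def poly_kernel(xi, xj, d=2):
    return (np.dot(xi, xj) + 1.0) ** d

def rbf_kernel(xi, xj, sigma=1.0):
    return np.exp(-np.sum((xi - xj) ** 2) / (2.0 * sigma ** 2))

for kernel in (poly_kernel, rbf_kernel):
    # Gram matrix with (i, j) entry K(x_i, x_j)
    G = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    eigvals = np.linalg.eigvalsh(G)
    print(kernel.__name__, "min eigenvalue:", eigvals.min())  # >= 0 (up to round-off)
```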
Respiratory Effort Signal Based Sleep Apnea Detection System Using Improved Random Forest Classifier
Published in IETE Journal of Research, 2021
Anju Prabha, Jyoti Yadav, Asha Rani, Vijander Singh
In this work, mSVM is implemented with a one-versus-all strategy [38]. SVM is a supervised learning algorithm that finds an optimal separating hyperplane, reducing the generalization error by maximizing the margin [39]. In the case of non-linear data, a kernel function maps the training data to a higher-dimensional space, where the data can be linearly separated. Different types of kernels are used in SVM, such as the polynomial, linear, and radial basis function (RBF) kernels. The training data are given as pairs $(x_1, y_1), \ldots, (x_p, y_p)$, where $x_i \in \mathbb{R}^n$ is a training sample and $y_i \in \{1, \ldots, k\}$ is its class label, for $i = 1, \ldots, p$. For k classes, k SVM models are constructed. The mth SVM is trained with the examples of the mth class given positive labels and the examples of all other classes given negative labels. The data are mapped to the higher-dimensional space and the hyperplanes are found by maximizing the margin. Among the k decision functions, the one with the highest value decides the class. However, for multi-class classification, the RF offers comparable accuracy and in some cases performs better, owing to its ease of use and its ability to learn both simple and complex classification functions [40]; it is described below.
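A minimal sketch of this one-versus-all scheme (our illustration, assuming scikit-learn; the paper's own implementation details are not shown in this excerpt):

```python
import numpy as np
from sklearn.svm import SVC

# Toy 3-class data: x_i in R^2, y_i in {1, 2, 3}
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.5, size=(20, 2)) for m in (0.0, 3.0, 6.0)])
y = np.repeat([1, 2, 3], 20)

# Train k binary SVMs: the m-th class is positive, all others negative
models = {m: SVC(kernel="rbf").fit(X, np.where(y == m, 1, -1))
          for m in (1, 2, 3)}

def predict(x):
    # The decision function with the highest value decides the class
    scores = {m: clf.decision_function(x.reshape(1, -1))[0]
              for m, clf in models.items()}
    return max(scores, key=scores.get)

print(predict(np.array([2.9, 3.1])))  # expected: class 2
```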
Analysis of classical and machine learning based short-term and mid-term load forecasting for smart grid
Published in International Journal of Sustainable Energy, 2021
Support vector machine (SVM) is a powerful machine learning tool based on supervised learning for classification and regression analysis, named SVC and SVR respectively. SVR is a non-parametric technique, as it relies on the kernel function. The kernel is a way of calculating the dot product of two vectors in a very high-dimensional feature space, and the kernel function is used to map lower-dimensional data into higher-dimensional data (Zhang, Wang, and Zhang 2017). The kernel can be linear, polynomial, or radial basis function (RBF, also called the Gaussian kernel). The choice of kernel function depends on the type of dataset used for modeling. In this work, a polynomial kernel is used to train the model, since the data used are highly nonlinear. The polynomial kernel is defined by Equation (8) as

$$K(x_i, x_j) = (x_i \cdot x_j + c)^n \qquad (8)$$

where n is the degree of the polynomial and c is the constant term (Zhang, Wang, and Zhang 2017).
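As an illustrative use of Equation (8), here is a sketch we add (assuming scikit-learn, where the degree n and constant c of the polynomial kernel map to the degree and coef0 parameters; the data are hypothetical):

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical nonlinear load-like signal: base load plus a daily cycle
rng = np.random.default_rng(0)
hours = rng.uniform(0, 24, size=(200, 1))
load = 50 + 20 * np.sin(hours[:, 0] * np.pi / 12) + rng.normal(0, 2, 200)

# scikit-learn's polynomial kernel is (gamma * x_i . x_j + coef0)^degree,
# i.e. Equation (8) with n = degree and c = coef0 (for gamma = 1)
model = SVR(kernel="poly", degree=3, coef0=1.0, C=10.0)
model.fit(hours, load)

print(model.predict(np.array([[6.0], [18.0]])))  # forecasts at 6:00 and 18:00
```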