Machine learning classifier for fault classification in photovoltaic system
Published in Rajesh Singh, Anita Gehlot, Intelligent Circuits and Systems, 2021
V.S. Bharath Kurukuru, Mohammed Ali Khan, Ahteshamul Haque, Arun Kumar Tripathi
In general, the SVM algorithm is based on constructing an optimal hyperplane with the maximal margin between support vectors. The support vectors are selected from the training samples lying in the boundary region between classes. Notably, the number of support vectors used is small compared with the size of the training data set. If the input data are linearly separable, the SVM algorithm searches for the optimal hyperplane in the original feature space of the input data. If the input data are not linearly separable, however, SVM maps them to a higher-dimensional feature space using a so-called kernel function. The kernel function must satisfy Mercer’s theorem (Mercer 1909) and correspond to some type of inner product in the constructed high-dimensional feature space. One advantage of SVM is its universality: the algorithm admits different kernel functions, chosen according to the researcher’s assumptions and the problem domain. Another advantage of the SVM algorithm is its effectiveness in high-dimensional feature spaces.
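As a minimal sketch of these points (not from the chapter itself), the snippet below trains scikit-learn SVMs with linear, Gaussian (RBF) and polynomial kernels on synthetic data; the dataset and kernel settings are illustrative assumptions:

```python
# Minimal sketch: training SVMs with different kernel choices on toy data.
# The data and kernel settings are illustrative only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A linear kernel suits linearly separable data; an RBF (Gaussian) kernel
# implicitly maps the inputs to a higher-dimensional feature space.
for kernel in ("linear", "rbf", "poly"):
    clf = SVC(kernel=kernel).fit(X_train, y_train)
    # The trained model keeps only a small subset of points as support vectors.
    print(kernel, len(clf.support_vectors_), clf.score(X_test, y_test))
```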
Gaussian Process Damage Prognosis under Random and Flight Profile Fatigue Loading
Published in Ashok N. Srivastava, Jiawei Han, Machine Learning and Knowledge Discovery for Engineering Systems Health Management, 2016
Aditi Chattopadhyay, Subhasish Mohanty
There are many possible choices for the kernel function [14]. From a modeling viewpoint, the objective is to select a kernel a priori that agrees with the assumptions and mathematically represents the structure of the process being modeled. Formally, it is required to specify a function that generates a positive definite kernel matrix for any set of inputs. In more general terms, the high-dimensional transformation through the assumed kernel function should satisfy Mercer’s theorem of functional analysis [23]. For any two input vectors $x_i$ and $x_j$, the kernel function in Equations 6.10 through 6.13 has the form

$$k(x_i, x_j, \Theta) = k_a(x_i, x_j, \Theta) + k_{\mathrm{scatter}}(x_i, x_j, \Theta),$$
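A minimal sketch of such an additive kernel, assuming scikit-learn's Gaussian-process API and approximating $k_a$ by a smooth RBF term and $k_{\mathrm{scatter}}$ by a white-noise term (the chapter's actual components may differ):

```python
# Minimal sketch of an additive kernel in the spirit of k = k_a + k_scatter.
# Assumption: k_a is modeled as a smooth RBF term and k_scatter as a
# white-noise term; the chapter's actual kernel components may differ.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(50)  # smooth trend + scatter

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel).fit(X, y)
# The hyperparameters Theta are fitted by maximizing the marginal likelihood.
print(gp.kernel_)
```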
Published in Alexander D. Poularikas, Stergios Stergiopoulos, Advanced Signal Processing, 2017
In an SVM, the selection of basis functions is required to satisfy Mercer’s theorem; that is, each basis function takes the form of a positive-definite inner-product kernel (assuming real-valued data):

$$K(x_i, x_j) = \tilde{\varphi}^{T}(x_i)\,\tilde{\varphi}(x_j),$$

where $x_i$ and $x_j$ are the input vectors for examples $i$ and $j$, and $\tilde{\varphi}(x_i)$ is the vector of hidden-unit outputs for input $x_i$. The hidden (feature) space is chosen to be of high dimensionality so as to transform a nonlinearly separable pattern-classification problem into a linearly separable one. Most importantly, however, in a pattern-classification task the support vectors are selected by the SVM learning algorithm so as to maximize the margin of separation between classes.
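A minimal sketch of this inner-product view, using an explicit feature map for a degree-2 polynomial kernel on 2-D inputs (the kernel and data are illustrative, not from the text):

```python
# Minimal sketch: a Mercer kernel is an inner product in a feature space,
# K(x_i, x_j) = phi(x_i)^T phi(x_j). Here a homogeneous degree-2 polynomial
# kernel on 2-D inputs is matched against its explicit feature map phi.
import numpy as np

def poly2_kernel(x, z):
    return (x @ z) ** 2

def phi(x):  # explicit feature map for the degree-2 kernel on 2-D input
    x1, x2 = x
    return np.array([x1 * x1, np.sqrt(2) * x1 * x2, x2 * x2])

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])
print(poly2_kernel(x, z), phi(x) @ phi(z))  # both print 16.0

# Mercer's condition in practice: the Gram matrix is positive semi-definite.
X = np.random.default_rng(0).normal(size=(5, 2))
G = (X @ X.T) ** 2
print(np.all(np.linalg.eigvalsh(G) >= -1e-9))  # True
```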
Optimal surrogate building using SVR for an industrial grinding process
Published in Materials and Manufacturing Processes, 2022
Ravi Kiran Inapakurthi, Kishalay Mitra
SVR enjoys this popularity partly due to the kernel trick, which uses a kernel function (usually the Gaussian kernel).[5] These kernels must obey Mercer’s theorem to project the data from the data space to the feature space.[13] This converts the non-convex objective into a quadratic programming problem.[6] The hyper-parameters of SVR, viz., the kernel parameter and the other tuning parameters, are usually tuned using a trial-based approach to obtain models of reasonably acceptable error.[14,15] Such arbitrary tuning of hyper-parameters, coupled with preference-based selection of the kernel function for approximating a process, does not rest on a sound rationale.
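A minimal sketch of such trial-based tuning, assuming scikit-learn's SVR with a Gaussian (RBF) kernel and a grid over the regularization parameter C, the tube width epsilon, and the kernel parameter gamma (all grid values are illustrative):

```python
# Minimal sketch: trial-based (grid) tuning of SVR hyper-parameters with a
# Gaussian (RBF) kernel. The data and grid values are illustrative assumptions.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(120, 1))
y = np.sinc(X).ravel() + 0.05 * rng.standard_normal(120)

grid = {
    "C": [0.1, 1, 10],       # regularization strength
    "epsilon": [0.01, 0.1],  # width of the insensitive tube
    "gamma": [0.1, 1, 10],   # Gaussian kernel parameter
}
search = GridSearchCV(SVR(kernel="rbf"), grid, cv=5).fit(X, y)
print(search.best_params_, search.best_score_)
```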
A comprehensive survey on machine learning approaches for dynamic spectrum access in cognitive radio networks
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2022
The Support Vector Machine (SVM) is a supervised machine learning algorithm used for classification and regression analysis. It is based on statistical learning theory with structural risk minimisation (Awe et al., 2013) and is typically preferred for the classification of complex problems. For a given training set, each input training vector belongs to one of two classes. These two classes are separated by an optimal hyperplane that has the largest distance to the nearest training points of either class (the maximal-margin classifier), which achieves good separation. SVM is easier to train when the classes are linearly separable. For non-separable classification, a non-linear SVM is obtained by introducing a kernel function. A function is a valid kernel if it satisfies Mercer’s theorem (Hofmann, 2006), which provides a necessary and sufficient characterisation of a function as a kernel. A kernel represents a similarity measure in the form of a similarity matrix (Gram matrix) between its input objects. The Gram matrix fuses all the information needed by the learning algorithm in the form of inner products. For more details, refer to (Burges, 1998). SVM is used to develop a real-time approach to sense the PU, as shown in Figure 10. The input composite signal includes signal and noise, which are independent of each other in the time domain. The sampled data are classified as PU or not based on testing and training of the SVM classification model.
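A minimal sketch of the sensing idea, with an assumed signal model (a sinusoidal stand-in for the PU waveform) and a simple per-window energy feature; neither is taken from the paper:

```python
# Minimal sketch: label windows of a composite signal as "PU present" or
# "noise only" and train an SVM on an energy feature. The signal model and
# feature are illustrative assumptions, not the paper's exact approach.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, length = 400, 64
labels = rng.integers(0, 2, n)                      # 1 = PU signal present
noise = rng.standard_normal((n, length))
tone = np.sin(2 * np.pi * 0.1 * np.arange(length))  # stand-in PU waveform
samples = noise + labels[:, None] * tone            # composite signal

# Feature: per-window energy (a common test statistic in spectrum sensing).
features = (samples ** 2).mean(axis=1, keepdims=True)

X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("detection accuracy:", clf.score(X_test, y_test))
```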
Volatility forecasting for the shipping market indexes: an AR-SVR-GARCH approach
Published in Maritime Policy & Management, 2022
Jiaguo Liu, Zhouzhi Li, Hao Sun, Lean Yu, Wenlian Gao
where $K(x_i, x_j)$ is the kernel function. The selection of the kernel function needs to satisfy the Mercer theorem. In fact, the linear kernel function, the polynomial kernel function and the Gaussian kernel function are commonly used in research:

$$K(x_i, x_j) = x_i^{T} x_j,$$
$$K(x_i, x_j) = \left(x_i^{T} x_j + c\right)^{d},$$
$$K(x_i, x_j) = \exp\!\left(-\frac{\lVert x_i - x_j \rVert^{2}}{2\sigma^{2}}\right).$$
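A minimal sketch evaluating these three kernels in plain NumPy; the parameter values ($c$, $d$, $\sigma$) are illustrative assumptions:

```python
# Minimal sketch: the three commonly used kernels evaluated on two vectors.
# Parameter values (coefficient c, degree d, width sigma) are illustrative.
import numpy as np

def linear_kernel(x, z):
    return x @ z

def polynomial_kernel(x, z, c=1.0, d=3):
    return (x @ z + c) ** d

def gaussian_kernel(x, z, sigma=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2 * sigma ** 2))

x = np.array([1.0, 0.5])
z = np.array([0.2, -1.0])
print(linear_kernel(x, z), polynomial_kernel(x, z), gaussian_kernel(x, z))
```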