Machinery Fault Detection using Artificial Intelligence in Industry 4.0
Published in Ketan Kotecha, Satish Kumar, Arunkumar Bongale, R. Suresh, Industry 4.0 in Small and Medium-Sized Enterprises (SMEs), 2022
Pooja Kamat, Sıtkı Akıncıoğlu, Rekha Sugandhi
Support vector machines (SVMs) are a class of machine learning techniques devised to solve both classification and regression problems. Their theoretical foundations are well established: training an SVM is a convex optimisation problem, which guarantees that the global optimum is reached. In addition, they may apply a non-linear transformation in the form of a kernel, which even allows SVMs to be regarded as a dimensionality reduction technique (W. Wang et al. 2003). One-class SVMs were designed for scenarios in which only one class is known and the task is to detect outliers that fall outside of it. This is known as novelty detection: the automatic identification of unusual or unexpected events (Pimentel et al. 2014). As a result, one-class SVMs are widely utilised in anomaly detection. Figure 4.5 depicts the architecture of a one-class SVM algorithm.
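As a concrete illustration of the novelty-detection setting described above, the following is a minimal sketch using scikit-learn's OneClassSVM; the feature data and parameter values are illustrative assumptions, not the chapter's actual fault-detection pipeline.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Training data: features from healthy machinery only (the single known class).
# These 2-D feature vectors are synthetic placeholders.
X_train = rng.normal(loc=0.0, scale=0.5, size=(200, 2))

# nu bounds the fraction of training points treated as outliers;
# the RBF kernel gives a non-linear boundary around the normal class.
model = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05)
model.fit(X_train)

# New observations: one near the healthy region, one far from it.
X_new = np.array([[0.1, -0.2], [3.0, 3.5]])

# predict() returns +1 for inliers (normal) and -1 for novelties (anomalies).
print(model.predict(X_new))  # e.g. [ 1 -1 ]
```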
Performance and Feasibility Model Generation Using Learning-Based Approach
Published in Soumya Pandit, Chittaranjan Mandal, Amit Patra, Nano-Scale CMOS Analog Circuits, 2018
Soumya Pandit, Chittaranjan Mandal, Amit Patra
The use of a kernel function allows the SVM representation to be independent of the dimensionality of the input space. The first step in constructing an LS-SVM model is the selection of an appropriate kernel function. For the choice of kernel function $K(\bar{\alpha}_k, \bar{\alpha})$ there are several alternatives. Some of the commonly used functions are listed in Table 4.8, where $d$, $\sigma$, $\kappa$, and $\theta$ are constants referred to as hyperparameters. In general, in any classification or regression problem, if the hyperparameters of the model are not well selected, the predicted results will not be good enough; optimum values for these parameters therefore need to be determined through proper tuning methods. Note that the Mercer condition holds for all $\sigma$ and $d$ values in the radial basis function (RBF) and polynomial cases, but not for all possible choices of $\kappa$ and $\theta$ in the multi-layer perceptron (MLP) case. The MLP kernel is therefore not considered further.
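Table 4.8 itself is not reproduced in this excerpt, but the kernels it names have standard forms; the sketch below implements the usual polynomial, RBF, and MLP (sigmoid) kernels under that assumption, with d, sigma, kappa, and theta as the hyperparameters mentioned above.

```python
import numpy as np

# Standard forms assumed for the kernels named in Table 4.8.
def polynomial_kernel(x, y, d=3, theta=1.0):
    # K(x, y) = (x . y + theta)^d; the Mercer condition holds for all d.
    return (np.dot(x, y) + theta) ** d

def rbf_kernel(x, y, sigma=1.0):
    # K(x, y) = exp(-||x - y||^2 / sigma^2); holds for all sigma.
    return np.exp(-np.sum((x - y) ** 2) / sigma ** 2)

def mlp_kernel(x, y, kappa=1.0, theta=-1.0):
    # K(x, y) = tanh(kappa * x . y + theta); a valid Mercer kernel only
    # for certain (kappa, theta), which is why it is dropped above.
    return np.tanh(kappa * np.dot(x, y) + theta)

x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
for kernel in (polynomial_kernel, rbf_kernel, mlp_kernel):
    print(kernel.__name__, kernel(x, y))
```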
Big Data in Medical Image Processing
Published in R. Suganya, S. Rajaram, A. Sheik Abdullah, Big Data in Medical Image Processing, 2018
R. Suganya, S. Rajaram, A. Sheik Abdullah
Support Vector Machine is a supervised learning technique used for classification purposes. In supervised learning, a set of training data and category labels is available, and the classifier is designed by exploiting this prior information. The binary SVM classifier takes a set of input data and predicts, for each given input, which of the two possible classes it belongs to. The original data in a finite-dimensional space are mapped into a higher-dimensional space to make the separation easier. The vectors lying closest to the hyperplane are called support vectors. The distance between the support vectors and the hyperplane is called the margin; the higher the margin, the lower the error of the classifier. The separation in the lower- and higher-dimensional spaces is illustrated in Figure 4.
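To make the support-vector and margin terminology concrete, here is a minimal sketch with scikit-learn's linear SVC on toy data (the data and parameters are illustrative, not from the chapter); for a linear SVM the margin width works out to 2/||w||.

```python
import numpy as np
from sklearn.svm import SVC

# Toy linearly separable data: two classes in 2-D.
X = np.array([[1, 1], [2, 1], [1, 2],     # class 0
              [4, 4], [5, 4], [4, 5]])    # class 1
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1e3)  # large C approximates a hard margin
clf.fit(X, y)

# The training vectors lying closest to the hyperplane are the support vectors.
print("support vectors:\n", clf.support_vectors_)

# For a linear SVM the margin width is 2 / ||w||:
# the larger the margin, the lower the expected classifier error.
w = clf.coef_[0]
print("margin width:", 2.0 / np.linalg.norm(w))
```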
An integrated approach based landslide susceptibility mapping: case of Muzaffarabad region, Pakistan
Published in Geomatics, Natural Hazards and Risk, 2023
Mubeen ul Basharat, Junaid Ali Khan, Hazem Ghassan Abdo, Hussein Almohamad
SVM defines the margin of the hyperplane by using support vectors. Based on statistical learning theory, SVM can distinguish the optimum hyperplane for differentiating two classes (Kavzoglu et al. 2014; Pham et al. 2016). Suppose that the vector of landslide conditioning factors is $X = (x_1, x_2, \ldots, x_n)$ and that the vector of classified variables (non-landslide and landslide) is represented by $Y$. The optimum distinguishing hyperplane can be established by resolving the following classification function:

$$Y = \operatorname{sign}\!\left(\sum_{i=1}^{n} \alpha_i Y_i K(x_i, x) + c\right)$$

where $\alpha_i$ is constant, $c$ signifies the offset from the origin of the hyperplane, $n$ represents the total number of conditioning factors, and $K(\cdot,\cdot)$ is the kernel function. In the present study, the kernel function used is the Gaussian radial basis function. For a binary classification problem such as the present problem of landslides, involving non-landslide and landslide points, the constraint condition for solving the equation is:

$$W \cdot h(X) + c \geq +1 \ \text{ if } Y = +1 \quad \text{and} \quad W \cdot h(X) + c \leq -1 \ \text{ if } Y = -1$$

In the above condition, $W$ is the weighting factor, and $h(X)$ is a non-linear function that maps the input space into a high-dimensional feature space.
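The following is a minimal sketch of this setup with scikit-learn: an RBF-kernel SVM trained on a matrix of conditioning factors with labels Y ∈ {−1, +1}. The synthetic data here merely stand in for the Muzaffarabad inventory, which is not reproduced in this excerpt.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Synthetic stand-in for the landslide inventory: each row is a vector of
# conditioning factors (e.g. slope, aspect, distance to fault, ...).
X_landslide = rng.normal(1.0, 0.4, size=(100, 5))       # Y = +1
X_non_landslide = rng.normal(-1.0, 0.4, size=(100, 5))  # Y = -1
X = np.vstack([X_landslide, X_non_landslide])
Y = np.hstack([np.ones(100), -np.ones(100)])

# Gaussian radial basis function kernel, as used in the study.
clf = SVC(kernel="rbf", gamma="scale", C=1.0)
clf.fit(X, Y)

# Susceptibility label for a new cell described by its conditioning factors.
new_cell = rng.normal(0.8, 0.4, size=(1, 5))
print(clf.predict(new_cell))  # +1 = landslide, -1 = non-landslide
```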
Monitoring exhaust air temperature and detecting faults during milk spray drying using data-driven framework
Published in Drying Technology, 2023
Support vector machine is a supervised learning algorithm used to analyze data for classification and regression analysis. SVM maps the training examples to points in space so as to maximize the width of the gap between the two categories; new examples are then mapped into the same space and predicted to belong to a category based on which side of the gap they fall. The data points in a support vector machine are viewed as p-dimensional vectors, which can be separated with a (p−1)-dimensional hyperplane. Of the many hyperplanes that might classify the data, one reasonable choice for the best hyperplane is the one that represents the largest separation, or margin, between the two classes, i.e. the hyperplane whose distance to the nearest data point on each side is maximized. Such a hyperplane is known as the maximum-margin hyperplane. A hyperplane can be written as equation (13):

$$w \cdot x - b = 0 \tag{13}$$

where $w$ is the normal vector to the hyperplane, $x$ is the set of data points, and $b$ is the offset of the hyperplane from the origin along the normal vector $w$. The performance of these machine learning classifier models is governed by calculating the accuracy, which is given by equation (14):

$$\text{Accuracy} = \frac{O_c}{O_T} \tag{14}$$

where $O_c$ is the number of correct predictions and $O_T$ is the total number of data points.
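A short sketch of equations (13) and (14) in code, with made-up values of w, b, data points, and labels purely for illustration:

```python
import numpy as np

# Illustrative hyperplane parameters (equation 13): w . x - b = 0.
w = np.array([1.0, -1.0])  # normal vector to the hyperplane
b = 0.5                    # offset along the normal vector

# Made-up data points and their true labels (+1 / -1).
X = np.array([[2.0, 0.0], [0.0, 2.0], [1.5, 0.5], [0.2, 1.8]])
y_true = np.array([1, -1, 1, -1])

# Classify by which side of the hyperplane each point falls on.
y_pred = np.sign(X @ w - b)

# Accuracy (equation 14): correct predictions O_c over total points O_T.
O_c = np.sum(y_pred == y_true)
O_T = len(y_true)
print("accuracy:", O_c / O_T)
```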
Smart Cities-Based Improving Atmospheric Particulate Matters Prediction Using Chi-Square Feature Selection Methods by Employing Machine Learning Techniques
Published in Applied Artificial Intelligence, 2022
Hanan Abdullah Mengash, Lal Hussain, Hany Mahgoub, A. Al-Qarafi, Mohamed K Nour, Radwa Marzouk, Shahzad Ahmad Qureshi, Anwer Mustafa Hilal
Among supervised learning methods, SVM is one of the most robust methods used for classification purposes. SVM has been used with excellent results for pattern recognition problems (Vapnik 1999), machine learning (Gammerman et al. 2016), and medical diagnosis (Dobrowolski, Wierzbowski, and Tomczykiewicz 2012; Subasi 2013). Moreover, SVM is used in a variety of applications such as recognition and detection, text recognition, content-based image retrieval, biometrics, speech recognition, etc. SVM constructs a hyperplane, or a set of hyperplanes, in a high- or infinite-dimensional space using the kernel trick, which makes nonlinear data separable with a large margin. Good classification separation is achieved with a large margin, which indicates a low generalization error of the classifier. SVM tries to find the hyperplane that gives the largest minimum distance to the training examples; in SVM theory, this distance is known as the margin, and the maximum-margin hyperplane yields the optimal margin. This property gives SVM its strong generalization performance. SVM is basically a two-category classifier that separates the data with a hyperplane, after mapping nonlinear training data into a higher dimension.
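To illustrate the kernel trick described above, i.e. nonlinear data becoming separable in a higher-dimensional space, here is a minimal sketch on the classic concentric-circles toy problem; the dataset and parameters are illustrative only, not from the article.

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Concentric circles: not linearly separable in the original 2-D space.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A linear hyperplane fails on this data ...
linear = SVC(kernel="linear").fit(X_tr, y_tr)

# ... while the RBF kernel implicitly maps the data to a higher-dimensional
# space where a separating hyperplane with a large margin exists.
rbf = SVC(kernel="rbf", gamma=2.0).fit(X_tr, y_tr)

print("linear accuracy:", linear.score(X_te, y_te))  # near chance
print("rbf accuracy:   ", rbf.score(X_te, y_te))     # near 1.0
```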