Multi-objective parametric optimization of wire electric discharge machining for Die Hard Steels using supervised machine learning techniques
Published in Rajeev Agrawal, J. Paulo Davim, Maria L. R. Varela, Monica Sharma, Industry 4.0 and Climate Change, 2023
Pratyush Bhatt, Pranav Taneja, Navriti Gupta
where w is the weight vector, b is the bias, and φ(x) is a non-linear mapping, so the regression function takes the form $f(x) = w^{T}\varphi(x) + b$. SVR aims to find a hyperplane such that the training points lie within a threshold distance ε of it [23]. Kernels are numeric functions that map the input data into a higher-dimensional space so as to facilitate the determination of the hyperplane; the kernel is selected based on the problem space and dataset. In our model, the kernel is biquadratic (degree = 4, matching the 4 input parameters and 4 levels of each input), with the regularization parameter (C = 2.0) and the error-sensitivity parameter (ε = 0.1) set low for wider margins and accurate predictions. The error reduction is achieved by minimizing [22]:

$$\frac{1}{2}w^{T}w + C\sum_{i=1}^{l}\left(\xi_i + \xi_i^{*}\right)$$
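A minimal sketch of this SVR configuration, assuming scikit-learn; the synthetic dataset below is an illustrative stand-in for the WEDM measurements, which are not reproduced in the excerpt.

```python
# Sketch of the SVR setup described above: polynomial (biquadratic) kernel of
# degree 4, C = 2.0, epsilon = 0.1. The training data is synthetic and merely
# mimics 4 input parameters; the real WEDM dataset is an assumption here.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(size=(64, 4))                         # 4 process parameters (placeholder)
y = X @ np.array([0.5, -0.2, 0.8, 0.1]) + 0.05 * rng.normal(size=64)

model = make_pipeline(
    StandardScaler(),                                 # scale features before the kernel
    SVR(kernel="poly", degree=4, C=2.0, epsilon=0.1), # parameters quoted in the text
)
model.fit(X, y)
print(model.predict(X[:3]))                           # predictions for 3 samples
```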
Support Vector Machines
Published in Richard J. Roiger, Just Enough R!, 2020
The key to the workings of an SVM is the hyperplane. A hyperplane is simply a subspace one dimension less than its ambient space. To see this, consider Figure 10.1, which displays a two-dimensional space with linearly separable classes. One class is represented by stars and the second by circles. In two dimensions, a hyperplane is a straight line. The figure shows two such hyperplanes labeled h1 and h2. Clearly, there is an infinite number of hyperplanes dividing the classes. So, which hyperplane is the best choice? It turns out that it’s the hyperplane showing the greatest separation between the class instances. This particular hyperplane is unique and has a special name. It is known as the maximum margin hyperplane (MMH). Given a dataset having two classes, the job of the SVM algorithm is to find the MMH. Here’s how it’s done.
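The book's examples are in R; as a rough stand-in, the following Python sketch (assuming scikit-learn) finds the MMH for two linearly separable 2-D classes like those in Figure 10.1.

```python
# Finding the maximum margin hyperplane (MMH) with a linear SVM on two
# linearly separable 2-D classes. A very large C approximates the hard-margin
# case, so the fitted line is driven purely by margin maximization.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (20, 2)),   # "circles" clustered near (0, 0)
               rng.normal(4, 0.5, (20, 2))])  # "stars" clustered near (4, 4)
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="linear", C=1e6).fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]        # hyperplane: w . x + b = 0
print("normal vector w:", w, " bias b:", b)
print("margin width:", 2 / np.linalg.norm(w))
print("support vectors:\n", clf.support_vectors_)
```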
Surface damage detection
Published in Michael O’Byrne, Bidisha Ghosh, Franck Schoefs, Vikram Pakrashi, Image-Based Damage Assessment for Underwater Inspections, 2019
Bidisha Ghosh, Michael O’Byrne, Franck Schoefs, Vikram Pakrashi
SVMs are used to classify pixels as either damaged or undamaged based on the intensity values of each color channel. The SVM is a supervised learning classifier based on statistical learning theory. The linear SVM handles linearly separable data using an (n-1)-dimensional hyperplane in an n-dimensional feature space (Cortes & Vapnik, 1995; Cristianini & Shawe-Taylor, 2000). This hyperplane is called a maximum-margin hyperplane because it maximizes the distance from the hyperplane to the nearest data points on either side. The linear kernel function is the dot product between pairs of data points. The kernel-function concept simplifies the identification of the hyperplane by implicitly transforming the feature space into a high-dimensional space; the hyperplane found in the high-dimensional feature space corresponds to a decision boundary in the input space. The classifier hyperplane is generated from the previously selected training datasets.
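A minimal sketch of this pixel-level damage classifier, assuming scikit-learn; the RGB values and labels below are synthetic placeholders for the labeled training regions used in the chapter.

```python
# Per-pixel damage classification from (R, G, B) intensities with a linear SVM.
# The two color clusters are invented for illustration only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
damaged   = rng.normal([140, 60, 50], 15, (200, 3))   # reddish corrosion tones (assumed)
undamaged = rng.normal([70, 110, 130], 15, (200, 3))  # blue-grey background tones (assumed)
X = np.clip(np.vstack([damaged, undamaged]), 0, 255)
y = np.array([1] * 200 + [0] * 200)                   # 1 = damaged, 0 = undamaged

clf = SVC(kernel="linear").fit(X, y)

# Classify every pixel of a (height, width, 3) image in a single call.
image = rng.uniform(0, 255, (4, 4, 3))
mask = clf.predict(image.reshape(-1, 3)).reshape(4, 4)
print(mask)                                           # binary damage mask
```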
Propeller optimization by interactive genetic algorithms and machine learning
Published in Ship Technology Research, 2023
Ioli Gypa, Marcus Jansson, Krister Wolff, Rickard Bensow
In this case, the optimal hyperplane can be calculated by solving the optimization problem

$$\min_{w,\,b,\,\xi}\ \frac{1}{2}\lVert w\rVert^{2} + C\sum_{i=1}^{l}\xi_i,$$

where C is a penalty parameter governing the compromise between margin maximization and training-error minimization, and $\xi_i$ is the training error. Finally, in the case of non-linearly separable data, kernels can be used. They map the input data into a high-dimensional space, and the optimal hyperplane with the maximum margin is calculated in this space, where the data can be linearly separable; this process is called the kernel trick. The kernel function is described as

$$K(x_i, x_j) = \varphi(x_i)^{T}\varphi(x_j),$$

where $\varphi(x_i)$ and $\varphi(x_j)$ are the transformed feature vectors. Broadly used kernels are the linear, the polynomial, the radial basis function, and the sigmoid.
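A short sketch comparing the four kernels named above on data that is not linearly separable, assuming scikit-learn; the concentric-rings dataset is an illustrative assumption, not the propeller data from the paper.

```python
# The kernel trick in action: a linear hyperplane cannot separate two
# concentric rings in the input space, but kernels that lift the data into a
# richer space can.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_circles(n_samples=400, factor=0.4, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    clf = SVC(kernel=kernel, C=1.0).fit(X_tr, y_tr)
    print(f"{kernel:8s} accuracy: {clf.score(X_te, y_te):.2f}")
# Expect the linear kernel to struggle on the rings while the RBF kernel
# separates them easily.
```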
Continuous monitoring of power consumption in urban buildings based on Internet of Things
Published in International Journal of Ambient Energy, 2022
S. Kaushik, K. Srinivasan, B. Sharmila, D. Devasena, M. Suresh, Hitesh Panchal, R. Ashokkumar, Kishor Kumar Sadasivuni, Neel Srimali
The Support Vector Machine algorithm is one of the machine learning techniques that uses separating hyperplanes to group the data based on the selected features. The SVM is trained to produce a hyperplane that separates the data into a human-presence group and a human-absence group. A set of data recorded under different conditions and parameters (with and without a human occupant) is fed to the SVM, and the SVM is trained to make a decision based on the thermal sensor inputs. In the algorithm, the sensor values read from the controller pins are given as input, and the minimum and maximum temperature limits are set. The design of the thermal map consisting of grid values is also provided and viewed via the I2C interface, as shown in Figure 3. The various grid values are fed as input to the SVM, which has been previously trained on various sets of input data and their corresponding outputs, ‘Human presence’ and ‘Human absence’. The trained SVM model provides the status of occupancy in the room, as shown in Figures 3–5.
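A minimal sketch of this occupancy classifier, assuming scikit-learn and an 8x8 thermal grid such as a grid-type IR array read over I2C would produce; the grid size, temperatures, and simulated data are assumptions, not values from the article.

```python
# Occupancy detection from a flattened thermal grid with an SVM. The
# thermal_grid() simulator stands in for real sensor readings.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def thermal_grid(occupied: bool) -> np.ndarray:
    """Simulate an 8x8 grid of temperatures (deg C); a warm patch marks a person."""
    grid = rng.normal(22.0, 0.5, (8, 8))      # ambient room temperature (assumed)
    if occupied:
        r, c = rng.integers(1, 7, size=2)
        grid[r-1:r+2, c-1:c+2] += 8.0         # ~30 C body-heat patch (assumed)
    return grid

X = np.array([thermal_grid(i % 2 == 0).ravel() for i in range(200)])
y = np.array([1 if i % 2 == 0 else 0 for i in range(200)])   # 1 = presence

clf = SVC(kernel="rbf").fit(X, y)
status = clf.predict(thermal_grid(True).ravel()[None, :])[0]
print("Human presence" if status == 1 else "Human absence")
```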
Ensemble Classifier for Stock Trading Recommendation
Published in Applied Artificial Intelligence, 2022
Support vector machine (SVM) is a statistical learning technique that constructs a hyperplane as the decision surface that maximizes the margin of separation between different classes (Cortes and Vapnik 1995). Given a set of training examples $(x_i, y_i)$, i = 1, 2, …, l, where $x_i \in R^n$ and $y_i \in \{-1, 1\}$ is the label of $x_i$, a standard SVM model is formulated as

$$\min_{w,\,b}\ \frac{1}{2}\langle w, w\rangle \quad \text{s.t.}\quad y_i\left(\langle w, x_i\rangle + b\right) \geq 1,\ i = 1, 2, \ldots, l,$$

where w is the normal vector of the hyperplane, b is a bias value, and ⟨p, q⟩ is the inner product of vectors p and q. Maximizing the margin of separation between the classes is thus equivalent to minimizing $\frac{1}{2}\langle w, w\rangle$.
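A compact sketch of this standard (hard-margin) formulation, approximated in scikit-learn by using a very large C; the toy points with labels in {-1, +1} are illustrative, not the paper's stock-trading features.

```python
# Hard-margin SVM approximated with a very large C; afterwards we verify the
# constraint y_i(<w, x_i> + b) >= 1 on every training point.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 2.0],          # class +1
              [-1.0, -1.0], [-2.0, -1.5], [-1.5, -2.0]])   # class -1
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e8).fit(X, y)                 # C -> inf ~ hard margin
w, b = clf.coef_[0], clf.intercept_[0]

print(y * (X @ w + b))                  # margins: all values should be >= ~1
print("objective 0.5 <w, w> =", 0.5 * (w @ w))
```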