Vibration-Based Condition Monitoring in Rotating Machinery
Published in Rajiv Tiwari, Rotor Systems: Analysis and Identification, 2017
Martin and Thorpe (1992) proposed monitoring rolling bearings based on normalized spectra. They used the envelope method and extended it by normalizing the frequency spectrum of a faulty rolling bearing by that of a defect-free bearing, which gave much greater sensitivity in diagnosing the defect frequencies. Jack and Nandi (2002) presented fault identification of roller bearings using the SVM and the ANN. They used several statistical features based on moments and cumulants, with a GA to choose the optimal features. They concluded that, at the time, the ANN trained faster, generalized well, and was slightly more robust than the SVM (constant-width and averaged-width SVMs), but they suggested that the SVM could perform better in the near future with new procedures. Rojas and Nandi (2005) presented an extension of the SVM to the fault diagnosis of rolling bearings. They used the sequential minimal optimization (SMO) algorithm to train the SVM, together with a means of selecting suitable training parameters. In a comparison of the SVM with the ANN, they observed a substantial improvement in the performance of the SVM over the ANN. Yu et al. (2007) provided an effective fault diagnosis of roller bearings using the intrinsic mode function (IMF) envelope spectrum and the SVM. The empirical mode decomposition (EMD) procedure was used to decompose the original signal into several IMFs; characteristic amplitude ratios at the various fault characteristic frequencies in the envelope spectra of the crucial IMFs were then defined and used as inputs to the SVM.
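To make the envelope approach concrete, below is a minimal Python sketch of a normalized envelope spectrum in the spirit of Martin and Thorpe (1992); the sampling rate, the simulated fault frequency, and all variable names are illustrative assumptions, not taken from the chapter.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x, fs):
    """Amplitude spectrum of the signal envelope (Hilbert demodulation)."""
    env = np.abs(hilbert(x))          # envelope of the raw vibration signal
    env = env - env.mean()            # remove the DC component before the FFT
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    return freqs, spec

# Hypothetical signals: 'baseline' from a healthy bearing, 'faulty' from a
# defective one whose 3 kHz resonance is modulated at a 107 Hz fault frequency.
fs = 12_000                           # sampling rate in Hz (assumed)
t = np.arange(fs) / fs                # one second of data
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 0.1, fs)
faulty = baseline + 0.5 * (1 + np.cos(2 * np.pi * 107 * t)) * np.sin(2 * np.pi * 3000 * t)

f, spec_healthy = envelope_spectrum(baseline, fs)
_, spec_faulty = envelope_spectrum(faulty, fs)

# Normalizing by the no-defect spectrum highlights the fault-frequency peaks.
normalized = spec_faulty / (spec_healthy + 1e-12)
print(f"largest normalized peak at {f[np.argmax(normalized[1:]) + 1]:.0f} Hz")
```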
Signature Generation Algorithms for Polymorphic Worms
Published in Mohssen Mohammed, Al-Sakib Khan Pathan, Automatic Defense Against Zero-day Polymorphic Worms in Communication Networks, 2016
Training the SVM requires solving an N-dimensional QP problem, where N is the number of samples in the training dataset. Solving this problem with standard QP methods involves large matrix operations and time-consuming numerical computations, and is typically slow and impractical for large problems. Sequential minimal optimization (SMO) is a simple algorithm that can, relatively quickly, solve the SVM QP problem without any extra matrix storage and without any numerical QP optimization steps. SMO decomposes the overall QP problem into the smallest possible QP subproblems, each involving just two Lagrange multipliers, which can be solved analytically.
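For illustration, here is a compact Python sketch of the simplified SMO scheme: each step picks a pair of multipliers, solves the two-variable subproblem analytically, and updates the threshold. The second multiplier is chosen at random rather than by Platt's full heuristics, so this is a teaching sketch under simplifying assumptions, not the production algorithm.

```python
import numpy as np

def simplified_smo(X, y, C=1.0, tol=1e-3, max_passes=5):
    """Simplified SMO for a linear-kernel SVM (labels y in {-1, +1})."""
    n = len(y)
    alpha = np.zeros(n)
    b = 0.0
    K = X @ X.T                                  # linear kernel matrix
    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            Ei = (alpha * y) @ K[:, i] + b - y[i]    # prediction error on i
            if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                j = np.random.choice([k for k in range(n) if k != i])
                Ej = (alpha * y) @ K[:, j] + b - y[j]
                ai_old, aj_old = alpha[i], alpha[j]
                # Box bounds keeping both multipliers inside [0, C].
                if y[i] != y[j]:
                    L, H = max(0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                if L == H:
                    continue
                eta = 2 * K[i, j] - K[i, i] - K[j, j]  # curvature along the constraint
                if eta >= 0:
                    continue
                # Analytic solution of the two-variable subproblem, then clip.
                alpha[j] = np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H)
                if abs(alpha[j] - aj_old) < 1e-5:
                    continue
                alpha[i] = ai_old + y[i] * y[j] * (aj_old - alpha[j])
                # Update the threshold b so the KKT conditions hold for the pair.
                b1 = b - Ei - y[i] * (alpha[i] - ai_old) * K[i, i] - y[j] * (alpha[j] - aj_old) * K[i, j]
                b2 = b - Ej - y[i] * (alpha[i] - ai_old) * K[i, j] - y[j] * (alpha[j] - aj_old) * K[j, j]
                if 0 < alpha[i] < C:
                    b = b1
                elif 0 < alpha[j] < C:
                    b = b2
                else:
                    b = (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    w = (alpha * y) @ X                          # weight vector (linear kernel only)
    return w, b

# Toy usage with linearly separable points (labels must be -1/+1).
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -2.0], [-1.0, -2.5]])
y = np.array([1, 1, -1, -1])
w, b = simplified_smo(X, y)
print("decision values:", X @ w + b)
```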
Big Data in Medical Image Processing
Published in R. Suganya, S. Rajaram, A. Sheik Abdullah, Big Data in Medical Image Processing, 2018
Sequential Minimal Optimization (SMO) is an algorithm for efficiently solving the optimization problem that arises during the training of an SVM. It was introduced by John Platt in 1998 at Microsoft Research and is widely used for training SVMs. SVM techniques separate the data belonging to different classes by fitting a hyperplane between them that maximizes the margin of separation. The data are mapped into a higher-dimensional feature space where they can be easily partitioned by a hyperplane. An SVM with a linear kernel is adopted here for predicting heart disease using a WSN dataset.
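As a workflow sketch, a linear-kernel SVM can be trained as follows. The chapter's WSN heart-disease dataset is not reproduced here, so a bundled toy dataset stands in; note that scikit-learn's SVC wraps LIBSVM, which itself trains with an SMO-type solver.

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data purely to show the workflow, not the chapter's dataset.
X, y = datasets.load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)       # SVMs are sensitive to feature scale
clf = SVC(kernel="linear", C=1.0)            # linear-kernel SVM, SMO-type training
clf.fit(scaler.transform(X_train), y_train)

print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
```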
Sequential minimal optimization for local scour around bridge piers
Published in Marine Georesources & Geotechnology, 2022
C. Kayadelen, G. Altay, S. Önal, Y. Önal
The support vector machine (SVM), proposed by Cortes and Vapnik (1995), is preferred for the advantage it offers in generalization performance when solving pattern recognition and complex regression problems. An SVM, which uses Lagrangian methods to solve the optimization problem, is simply defined as a hyperplane between a set of positive data and a set of negative data. Sequential minimal optimization (SMO) is an algorithm developed by Platt (1998) to train SVM models. It breaks a very large quadratic programming (QP) optimization problem into the smallest possible QP problems, which can be solved analytically. This feature gives SMO a faster numerical QP optimization than the chunking algorithm conventionally used to train the SVM (Platt 1998). SMO starts with an initial pair of Lagrange multipliers and continues until the optimal values of these multipliers are found; the SVM is then updated according to these values. One advantage of the SMO algorithm is that no extra matrix storage is needed while training the SVM. SMO has two components: an analytic method for solving for the two Lagrange multipliers, and a heuristic for choosing which multipliers to optimize. Smola and Schölkopf (1998) proposed a modification of the SMO algorithm, based on an iterative learning algorithm, to solve regression problems. The primary aim of SVM regression analysis is to find a hyperplane function f(v) (Equations (9) and (10)) that yields the optimal target values (ti) for every input vector (vi). This is done by training on a set (vi, ti) that includes n data points (Platt 1998).
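To illustrate the regression formulation described above, the following sketch fits an ε-SVR to synthetic (vi, ti) pairs; the data and hyperparameters are illustrative assumptions, not the scour dataset from the paper. scikit-learn's SVR wraps LIBSVM, whose ε-SVR training uses an SMO-type decomposition in the spirit of Smola and Schölkopf's extension.

```python
import numpy as np
from sklearn.svm import SVR

# Illustrative stand-in for the (vi, ti) training pairs described above.
rng = np.random.default_rng(1)
v = rng.uniform(0, 5, size=(200, 1))              # input vectors vi
t = np.sin(v).ravel() + rng.normal(0, 0.1, 200)   # noisy target values ti

# Fit the regression function f(v) within an epsilon-insensitive tube.
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(v, t)
print("f(v) at v = 1.0:", model.predict([[1.0]])[0])
```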
A data-driven method for enhancing the image-based automatic inspection of IC wire bonding defects
Published in International Journal of Production Research, 2021
Junlong Chen, Zijun Zhang, Feng Wu
In developing an SVM (Cortes and Vapnik 1995) model, three kernels, the polynomial kernel, the Gaussian kernel, and the normalised polynomial kernel, are evaluated over a number of trials, and the polynomial kernel of degree 1 is selected. The SVM is trained with a batch size of 100, and Sequential Minimal Optimization (SMO) (Platt 1998) is employed to optimise the remaining parameters of the SVM model.
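The trial-based kernel comparison can be sketched as follows; the features are synthetic stand-ins for the wire-bond image data, and scikit-learn offers no direct counterpart of the normalised polynomial kernel, so only two of the three candidates appear.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical stand-in for the wire-bond image features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Each candidate kernel is scored by cross-validation; the best is retained.
candidates = {
    "polynomial (degree 1)": SVC(kernel="poly", degree=1),
    "Gaussian (RBF)": SVC(kernel="rbf"),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```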