Support Vector Machine
Published in Paresh Chandra Deka, A Primer on Machine Learning Applications in Civil Engineering, 2019
Statistical learning refers to a vast set of mathematical tools for understanding data. These tools can be classified as supervised or unsupervised. Broadly speaking, supervised statistical learning involves building a statistical model for predicting, or estimating, an output based on one or more inputs. Problems of this nature occur in fields as diverse as business, medicine, astrophysics, and public policy. With unsupervised statistical learning, there are inputs but no supervising output; nevertheless, we can learn relationships and structure from such data.

The main goal of statistical learning theory is to provide a framework for studying the problem of inference (i.e., gaining knowledge, making predictions, making decisions, and constructing models from a set of data). This is studied in a statistical framework; that is, assumptions of a statistical nature are made about the underlying phenomena (about the way the data are generated). Indeed, a theory of inference should be able to give formal definitions of terms such as learning, generalization, and overfitting, and to characterize the performance of learning algorithms so that, ultimately, it may help design better ones. The theory thus has two goals: to make these notions precise and to derive new or improved algorithms.
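The supervised/unsupervised distinction described above can be made concrete with a small sketch in plain Python. All data, models, and numbers here are illustrative and not taken from the text: the supervised learner fits inputs to known outputs, while the unsupervised one finds structure in inputs alone.

```python
# A minimal sketch, in plain Python, of the supervised/unsupervised
# distinction. All data here are made-up toy values.

# --- Supervised: inputs xs with observed outputs ys; learn y ~ a*x + b ---
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.1, 4.9, 7.2, 9.0]               # roughly y = 2x + 1

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Ordinary least squares for a single input variable
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
b = mean_y - a * mean_x

def predict(x):
    """The learned model: maps a new input to a predicted output."""
    return a * x + b

# --- Unsupervised: inputs only; discover structure (two clusters) ---
points = [0.2, 0.4, 0.5, 9.6, 9.8, 10.1]
centre_lo, centre_hi = min(points), max(points)
for _ in range(10):                 # naive one-dimensional k-means, k = 2
    lo = [p for p in points if abs(p - centre_lo) <= abs(p - centre_hi)]
    hi = [p for p in points if abs(p - centre_lo) > abs(p - centre_hi)]
    centre_lo, centre_hi = sum(lo) / len(lo), sum(hi) / len(hi)

print(round(predict(5.0), 1))                     # → 11.1 (extrapolation)
print(round(centre_lo, 2), round(centre_hi, 2))   # → 0.37 9.83
```

The supervised half learns a mapping it can apply to new inputs; the unsupervised half never sees an output, yet still recovers the two groups present in the data.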
WiFi Location Fingerprinting
Published in Hassan A. Karimi, Advanced Location-Based Technologies and Services, 2016
Other pattern-matching algorithms have been suggested for use with WiFi location fingerprinting. These include Bayesian inference, statistical learning theory, support vector machines, and neural networks (see, for example, Battiti et al. 2002).
A framework of developing machine learning models for facility life-cycle cost analysis
Published in Building Research & Information, 2020
Xinghua Gao, Pardis Pishdad-Bozorgi
Support vector machine (SVM) regression is another commonly used error-based machine learning method for predictive analytics. The support vector algorithm is a nonlinear generalization of the Generalized Portrait algorithm (Smola & Schölkopf, 2004). It is grounded in the framework of statistical learning theory, which characterizes the properties that enable learning machines to generalize well to unseen data (Smola & Schölkopf, 2004). SVMs are a specific class of algorithms characterized by ‘usage of kernels, absence of local minima, sparseness of the solution, and capacity control obtained by acting on the margins, or on the number of support vectors’ (Gelfusa et al., 2015). SVMs map input vectors into a high-dimensional feature space, where a maximal-margin hyperplane is constructed (Chapelle & Vapnik, 2000). SVMs can be applied to regression problems by introducing an alternative loss function that incorporates a distance measure (Dibike, Velickov, & Solomatine, 2000; Smola, 1996; Smola & Schölkopf, 2004).
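The alternative loss function mentioned above is commonly the epsilon-insensitive loss: residuals inside an epsilon-tube incur no penalty, which is what makes the solution sparse. The following is a hedged sketch of that idea, fitting a linear model by subgradient descent on the regularized objective; the data, hyperparameters, and optimizer are illustrative choices, not the formulation of the cited works.

```python
# A sketch of epsilon-insensitive SVM regression: errors smaller than
# epsilon are ignored; larger errors are penalized linearly. Data and
# hyperparameter values below are made up for illustration.

def eps_insensitive(y_true, y_pred, epsilon=0.1):
    """Loss is zero inside the epsilon-tube, linear outside it."""
    return max(0.0, abs(y_true - y_pred) - epsilon)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 2.0, 3.9, 6.1, 8.0]            # roughly y = 2x

w, b, epsilon, C = 0.0, 0.0, 0.1, 1.0
for t in range(5000):
    step = 0.02 / (1.0 + 0.01 * t)        # diminishing step size
    gw, gb = w, 0.0                       # gradient of the (1/2)*w**2 penalty
    for x, y in zip(xs, ys):
        r = (w * x + b) - y
        if r > epsilon:                   # above the tube: push prediction down
            gw += C * x
            gb += C
        elif r < -epsilon:                # below the tube: push prediction up
            gw -= C * x
            gb -= C
        # |r| <= epsilon: point sits inside the tube and contributes
        # nothing -- the source of the sparseness the text mentions
    w -= step * gw
    b -= step * gb
```

After training, `w` is close to the underlying slope of 2 and `b` close to 0; only points on or outside the tube influence the fit, mirroring the role of support vectors.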
Classifying random variables based on support vector machine and a neural network scheme
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2022
This section provides a short review of the standard SVM for classification. Statistical learning theory deals primarily with supervised learning problems, which are formulated in terms of an input (feature) space and an output (label) space.
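As a rough illustration of the classifier reviewed here (not the authors' formulation), the sketch below trains a linear soft-margin SVM on a toy two-dimensional problem with labels in {-1, +1}, using subgradient descent on the regularized hinge loss; all data and hyperparameters are invented for the example.

```python
# A hedged sketch of a standard soft-margin linear SVM classifier,
# trained on made-up two-dimensional data with labels in {-1, +1}.

data = [((-2.0, 1.0), +1), ((-1.5, 2.0), +1), ((-3.0, 0.5), +1),
        ((2.0, -1.0), -1), ((1.5, -2.0), -1), ((3.0, -0.5), -1)]

w = [0.0, 0.0]
b = 0.0
lam = 0.01                                   # regularization strength
for t in range(2000):
    step = 0.05 / (1.0 + 0.01 * t)           # diminishing step size
    gw = [lam * w[0], lam * w[1]]            # gradient of the L2 penalty
    gb = 0.0
    for (x1, x2), y in data:
        # Hinge loss is active when the point violates the unit margin
        if y * (w[0] * x1 + w[1] * x2 + b) < 1:
            gw[0] -= y * x1
            gw[1] -= y * x2
            gb -= y
    w[0] -= step * gw[0]
    w[1] -= step * gw[1]
    b -= step * gb

# The learned hyperplane w . x + b = 0 separates the two classes.
```

The regularization term keeps the weight vector small, which is equivalent to maximizing the margin between the two classes; a full SVM would additionally allow kernels, as discussed in the preceding excerpts.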