Cooperative Data Fusion for Advanced Monitoring and Assessment in Healthcare Infrastructures
Published in Pietro Salvo, Miguel Hernandez-Silveira, Krzysztof Iniewski, Wireless Medical Systems and Algorithms, 2017
Vasileios Tsoutsouras, Sotirios Xydis, Dimitrios Soudris
Broadly, classification algorithms can be categorized with respect to the characteristics of the data set, for example, linearly separable versus nonlinearly separable data. Linear classifiers are classification algorithms that can handle data in which the input vectors belong to classes distinct enough for a single line to separate them. Figure 7.4a illustrates such an example, while Figure 7.4b shows the case of a nonlinearly separable data set [3]. The points in blue belong to class A, while the points in red belong to class B. As depicted in Figure 7.4b, the distribution of the data makes it infeasible to determine one line that separates the data into different classes. In higher dimensions, where the input vectors have three or more features, hyperplanes rather than lines are used to distinguish between the classes, but such classes are still regarded as linearly separable. It is important to note that a classifier can always make a classification decision; the goal is to discover and optimize the classifier that minimizes the classification error.
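As a concrete illustration of this distinction (a sketch added here, not taken from the chapter), the following Python snippet fits a perceptron, a classic linear classifier, to a linearly separable data set and to an XOR-style data set; the first is separated essentially perfectly, while no single line can separate the second:

```python
# Minimal sketch (illustrative, not from the chapter): contrasting a
# linearly separable data set with a nonlinearly separable one, in the
# spirit of Figure 7.4a and 7.4b.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import Perceptron

# (a) Two well-separated clusters: a single line suffices.
X_lin, y_lin = make_blobs(n_samples=200, centers=2, cluster_std=1.0,
                          random_state=0)
clf = Perceptron(random_state=0).fit(X_lin, y_lin)
print("separable data, training accuracy:", clf.score(X_lin, y_lin))  # ~1.0

# (b) XOR-style data: the classes interleave, so no single line works.
rng = np.random.default_rng(0)
X_xor = rng.normal(size=(200, 2))
y_xor = (X_xor[:, 0] * X_xor[:, 1] > 0).astype(int)
clf_xor = Perceptron(random_state=0).fit(X_xor, y_xor)
print("XOR-style data, training accuracy:", clf_xor.score(X_xor, y_xor))  # near chance
```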
Trust-Based Sentimental Analysis and Online Trust Evaluation
Published in Gulshan Shrivastava, Sheng-Lung Peng, Himani Bansal, Kavita Sharma, Meenakshi Sharma, New Age Analytics, 2020
S. Rakeshkumar, S. Muthuramalingam
Linear classifiers are applied to classify the given input data into various classes. Linear classifiers include the neural network (NN) (Kalaiselvi et al., 2017) and the support vector machine (SVM). The SVM constructs the hyperplane with maximum margin for the training sample set. Hence, in addition to speech recognition, the SVM (Singh et al., 2018) is also applied in text classification. An NN (Mahajan and Dev, 2018) operates with multiple neurons arranged in layers, where the output of each layer is fed as input to the next. Training the neurons is complex, since correcting their values requires the error to be propagated back through the preceding layers.
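The maximum-margin hyperplane mentioned above can be made concrete with a small sketch (ours, not the authors' implementation); on toy data, a linear SVM exposes the hyperplane normal w and offset b, from which the margin width 2/||w|| follows:

```python
# Hedged sketch: a linear SVM finds the maximum-margin hyperplane
# w.x + b = 0 separating the training samples.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0],   # class 0
              [6.0, 5.0], [7.0, 8.0], [8.0, 6.0]])  # class 1
y = np.array([0, 0, 0, 1, 1, 1])

svm = SVC(kernel='linear', C=1.0).fit(X, y)
w, b = svm.coef_[0], svm.intercept_[0]
print("hyperplane normal w:", w, "offset b:", b)
print("margin width 2/||w||:", 2.0 / np.linalg.norm(w))
print("support vectors:", svm.support_vectors_)
```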
Brain–Computer Interface
Published in Chang S. Nam, Anton Nijholt, Fabien Lotte, Brain–Computer Interfaces Handbook, 2018
Chang S. Nam, Inchul Choi, Amy Wadeson, Mincheol Whang
Linear classifiers are discriminant algorithms that use a linear function to classify the data into mutually exclusive and exhaustive classes, assuming that the data come from a Gaussian mixture model. Because of their structural simplicity, competitive accuracy, and very fast training and testing, linear classifiers are one of the most popular algorithms used to design BCI applications. Two main kinds of linear classifier are described: linear discriminant analysis (LDA) and support vector machine (SVM).
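To make the LDA side of this concrete, here is a minimal sketch (ours, not from the handbook; the two-feature "rest"/"task" data are synthetic stand-ins for BCI trials). LDA models each class as Gaussian with a shared covariance, which yields a linear decision boundary:

```python
# Illustrative sketch: LDA under its Gaussian, shared-covariance assumption.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)
cov = np.array([[1.0, 0.3], [0.3, 1.0]])             # shared covariance
X0 = rng.multivariate_normal([0, 0], cov, size=100)  # e.g., "rest" trials
X1 = rng.multivariate_normal([2, 2], cov, size=100)  # e.g., "task" trials
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))
print("linear boundary: coef", lda.coef_[0], "intercept", lda.intercept_[0])
```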
Machine learning approaches for anomaly detection of water quality on a real-world data set*
Published in Journal of Information and Telecommunication, 2019
Fitore Muharemi, Doina Logofătu, Florin Leon
The Support Vector Machine (SVM) is a supervised machine learning algorithm introduced by Boser et al. (Boser, Guyon, & Vapnik, 1992), which can be used for both classification and regression problems. The SVM is among the best 'off-the-shelf' supervised algorithms. It is a linear two-class classifier; the idea of a linear classifier is to find a hyperplane that separates the data points appropriately. The SVM can accurately forecast time series data even when the underlying system processes are nonlinear, non-stationary, and not defined a priori. SVMs have also been shown to outperform other nonlinear techniques, including neural-network-based nonlinear prediction techniques such as multilayer perceptrons (Sapankevych & Sankar, 2009). It is also well suited to binary classification, so it was expected to perform well in our experiment. We tested different SVM variants by changing the parameters (kernel, cost, gamma). After parameter tuning, kernel=‘rbf’, cost=500 and gamma=1 achieved one of the best F1 scores.
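In scikit-learn terms, the reported configuration corresponds to SVC(kernel='rbf', C=500, gamma=1); the sketch below shows this setup on a synthetic imbalanced placeholder, since the actual water-quality data set is not reproduced here:

```python
# Sketch of the reported configuration (kernel='rbf', cost=500, gamma=1);
# the imbalanced synthetic data stands in for the anomaly-detection set.
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1],  # rare positives,
                           random_state=0)                      # like anomalies
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = SVC(kernel='rbf', C=500, gamma=1).fit(X_tr, y_tr)
print("F1 score:", f1_score(y_te, svm.predict(X_te)))
```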
Automatic Text Summarization of Konkani Folk Tales Using Supervised Machine Learning Algorithms and Language Independent Features
Published in IETE Journal of Research, 2021
We observe from the evaluation results that the linear models, the Ridge Regression Classifier and the Support Vector Linear Classifier, performed better than the non-linear models, viz. the CART Decision Tree Classifier and Random Forest. The linear models are simpler, make predictions using a linear function, and work well when the data are linearly separable. The non-linear models are typically more complex and can be applied to problems where the data may not be linearly separable, although they can also be used on linearly separable data; they are likewise more likely to overfit than linear models. Problems such as text classification usually involve many features; in our case we have 13 distinct features, which gives a high-dimensional feature space. When data are projected into high-dimensional spaces, they are likely to be linearly separable. The linear models can therefore approximate a function that makes better binary class predictions, leading to an overall better selection of sentences for summary formation, which is reflected in the ROUGE scores. The non-linear models, being more complex, are prone to fitting the training data too closely, which reduces their generalizability to test data; this affects their predictions, so they do not perform as well as the linear models, as reflected in the ROUGE scores. We also observe that, among the non-linear models, Random Forest performed better than the Decision Tree; being an ensemble technique, it reduces overfitting and variance and thus produces a higher ROUGE score than the Decision Tree.
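The comparison described above can be sketched as follows (our illustration, not the paper's code; a synthetic 13-feature data set stands in for the sentence features, and cross-validated accuracy replaces the ROUGE evaluation):

```python
# Hedged sketch: the two linear and two non-linear models named above,
# compared on a synthetic 13-feature stand-in for the sentence features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=13, random_state=0)

models = {
    "Ridge Regression Classifier": RidgeClassifier(),
    "Support Vector Linear Classifier": LinearSVC(max_iter=10000),
    "CART Decision Tree Classifier": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```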
Convex Bidirectional Large Margin Classifiers
Published in Technometrics, 2019
Based on a linear combination of the variables, linear classifiers are among the most widely used classification tools. Training and testing procedures for linear classifiers are relatively efficient compared with nonlinear ones, especially in high-dimensional spaces. Moreover, owing to the simple linear form, the corresponding interpretation is straightforward. Despite this simplicity, however, a linear classifier may fail to handle classification problems with nonlinear boundaries, and thus its prediction performance can be suboptimal.
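The failure mode described here is easy to demonstrate with a sketch (ours, not from the article): on data with a circular class boundary, a linear classifier stays near chance level while a nonlinear (RBF-kernel) classifier separates the classes easily.

```python
# Illustrative sketch: a linear classifier on a nonlinear (circular) boundary.
from sklearn.datasets import make_circles
from sklearn.svm import SVC, LinearSVC

X, y = make_circles(n_samples=400, noise=0.05, factor=0.5, random_state=0)

linear = LinearSVC(max_iter=10000).fit(X, y)
rbf = SVC(kernel='rbf').fit(X, y)
print("linear classifier accuracy:", linear.score(X, y))   # near 0.5
print("RBF-kernel classifier accuracy:", rbf.score(X, y))  # near 1.0
```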