Machine Learning – Supervised Learning
Published in Cybersecurity Analytics, 2019
Rakesh M. Verma, David J. Marchette
The data were drawn from two multivariate normal distributions, so the Bayes error is not zero, but in this case the Bayes classifier is a linear classifier: it is the perpendicular bisector of the line segment between the means (0, 0) and (2, 2). This suggests one method for defining a linear classifier, as the perpendicular bisector of the segment joining the two class means. This works provided the covariances are equal and a multiple of the identity matrix, and one knows (or estimates) the means. More generally, there are several algorithms for fitting a linear classifier. It is instructive to write the density formulas for two normal distributions with equal covariances, equate them, and solve for the corresponding decision surface. This yields a quadratic equation that is straightforward to solve, and in the case of equal covariances the quadratic terms cancel, so the resulting equation is linear. If we do not assume equal covariances, the quadratic terms remain, and the resulting classifier is called a quadratic classifier.
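To make the cancellation explicit, here is the algebra sketched in LaTeX, assuming equal class priors (an assumption; the excerpt does not state the priors):

```latex
% Set the two Gaussian log-densities equal; with a common covariance
% \Sigma the normalizing constants are identical and drop out:
\begin{align*}
  (x-\mu_1)^\top \Sigma^{-1} (x-\mu_1)
    &= (x-\mu_2)^\top \Sigma^{-1} (x-\mu_2). \\
  \intertext{Expanding both sides, the quadratic term
    $x^\top \Sigma^{-1} x$ cancels, leaving}
  2(\mu_2-\mu_1)^\top \Sigma^{-1} x
    &= \mu_2^\top \Sigma^{-1}\mu_2 - \mu_1^\top \Sigma^{-1}\mu_1,
\end{align*}
% which is linear in x. With \Sigma = \sigma^2 I, \mu_1 = (0,0),
% \mu_2 = (2,2), this reduces to 4x_1 + 4x_2 = 8, i.e. x_1 + x_2 = 2:
% the perpendicular bisector of the segment joining the two means.
```

Dropping the equal-covariance assumption leaves the x-quadratic terms in place, which is exactly the quadratic classifier described above.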
Genetic Syndrome Identification: An Image Processing Approach
Published in IETE Journal of Research, 2022
Archana P. Ekbote, Varsha R. Ratnaparkhe
In this work, the database consists of normal and special children, so a two-class classification was performed: normal face images versus dysmorphic face images. Classification assigns a class label to each sample, and is achieved by partitioning the feature space and defining a discriminant function. A discriminant function takes an input feature vector x and assigns it to one of the classes, denoted C_k. Depending on the nature of the discriminant function, classifiers can be categorised as linear or nonlinear: in a linear classifier the discriminant function is a linear combination of the feature values, while in a nonlinear classifier it is a nonlinear combination of the feature values. In Ref. [13], the simplest representation of a linear discriminant function is a linear function of the input vector, y(x) = w^T x + b, where w is called a weight vector and b is a bias. In a two-class classifier, an input vector x is assigned to class C1 if y(x) >= 0 and to class C2 otherwise; the corresponding decision boundary is therefore defined by the relation y(x) = 0. Both linear and nonlinear decision boundaries can also be generated by fitting class-conditional densities to the data and applying Bayes' rule [14].

There are a number of linear classifiers, such as minimum distance, maximum likelihood, support vector machine, and the normal-densities-based linear classifier, also called the Linear Discriminant Classifier (LDC); nonlinear classifiers include KNN, neural networks, Bayesian networks, Parzen, decision trees, and the normal-densities-based quadratic classifier, also called the Quadratic Discriminant Classifier (QDC). In this paper, LDC, QDC, and a Support Vector Machine with Radial Basis Function kernel (RBF-SVM) are used for classification. The output of the classifiers indicates the discrimination ability of the features.
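For illustration, the following is a minimal sketch of the three classifiers named above (LDC, QDC, RBF-SVM) using scikit-learn on synthetic two-class data; the synthetic features, split sizes, and default hyperparameters are placeholder assumptions, not the paper's actual face-feature pipeline.

```python
# Minimal sketch: LDC, QDC, and RBF-SVM on synthetic two-class data.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic feature vectors standing in for the two classes
# ("normal" vs "dysmorphic" face features in the paper).
X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           n_classes=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

classifiers = {
    "LDC (linear)": LinearDiscriminantAnalysis(),
    "QDC (quadratic)": QuadraticDiscriminantAnalysis(),
    "RBF-SVM (nonlinear)": SVC(kernel="rbf"),  # gamma="scale" by default
}

# Fit each classifier and report held-out accuracy, i.e. the
# "discrimination ability" of the features under each model.
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: test accuracy = {acc:.3f}")
```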