Production-Level Case Study: Automated Visual Inspection of a Laser Process
Published in Industrial Applications of Machine Learning, 2019
Pedro Larrañaga, David Atienza, Javier Diaz-Rozo, Alberto Ogbechie, Carlos Puerto-Santana, Concha Bielza
In the construction of the AVI system, however, we found that the only available examples were from correctly processed cylinders. This scenario is very common in manufacturing inspection applications because significant aberrations rarely occur in efficient industrial processes (Timusk et al., 2008). This makes it difficult to train automated systems that rely on statistical learning because they require datasets with examples of faulty situations, balanced, whenever possible, against examples of normal conditions (Jäger et al., 2008; Surace and Worden, 2010). When errors have not been reported during the training stage, the classification task of discerning between normal and anomalous products can be performed using one-class classification. One-class classification is an anomaly detection technique used in machine learning to solve binary classification problems when all the labeled examples belong to one of the classes (Chandola et al., 2009).
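To make the idea concrete, the following is a minimal sketch of one-class classification trained only on "normal" examples. It is an illustrative stand-in, not the AVI system described in the chapter; the features are synthetic placeholders for per-cylinder measurements, and scikit-learn's OneClassSVM is used as the detector.

```python
# One-class classification trained only on normal examples (illustrative sketch).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Training data: features extracted from correctly processed cylinders only.
X_normal = rng.normal(loc=0.0, scale=1.0, size=(200, 5))

# nu bounds the fraction of training points treated as boundary violations.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
clf.fit(X_normal)

# At inspection time, new parts are scored against the learned boundary:
# +1 means "consistent with normal production", -1 flags a potential defect.
X_new = rng.normal(loc=3.0, scale=1.0, size=(5, 5))  # simulated aberrant parts
print(clf.predict(X_new))
```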
Machine Learning Based Hospital-Acquired Infection Control System
Published in Shampa Sen, Leonid Datta, Sayak Mitra, Machine Learning and IoT, 2018
Sehaj Sharma, Prajit Kumar Datta, Gaurav Bansal
Classification is normally a two-class problem, but the one-class approach is particularly attractive when cases from one class are difficult to obtain for model construction, as is the case for NI. One-class classification is the ability to separate new cases that resemble members of the training set from all other cases that can occur. Although it looks similar to a conventional (two-class) classification problem, one-class classification trains the classifier differently: it is trained only on cases from the majority class and never on cases from the minority class. The classifier must therefore estimate the boundary separating the two classes from data lying on only one side of it (the side on which it is trained). The objective here is to identify patients with one or more NIs, based on clinical and other data collected during the survey.
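The training scheme described above can be sketched with any novelty detector fitted on majority-class cases only. The snippet below is a hypothetical illustration using scikit-learn's IsolationForest on made-up feature vectors; the survey variables from the chapter are not reproduced here.

```python
# One-class training idea: fit a novelty detector on the majority class only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Majority class only: patients recorded without an NI (hypothetical features).
X_no_infection = rng.normal(size=(500, 8))

detector = IsolationForest(contamination=0.05, random_state=1)
detector.fit(X_no_infection)

# New patients are compared against the boundary learned from the majority
# class alone: -1 marks cases that look unlike the NI-free training set.
X_new_patients = rng.normal(loc=2.5, size=(3, 8))
print(detector.predict(X_new_patients))
```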
Support Vector Machine
Published in Paresh Chra Deka, A Primer on Machine Learning Applications in Civil Engineering, 2019
The SVM can be formulated for one-class problems. This formulation is called one-class classification and is applied to clustering and to outlier detection for both pattern classification and function approximation. Conventional clustering methods, such as the k-means and fuzzy c-means clustering algorithms, can be extended to the feature space. The domain description defines the region of the data by a hypersphere in the feature space, and this hypersphere corresponds to clustered regions in the input space. Hence, the domain description can be used for clustering. If there are no outliers, that is, if all the data lie inside or on the hypersphere, the problem reduces to determining the clusters in the input space.
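A small sketch of this idea is given below, using scikit-learn's OneClassSVM (the ν-formulation) as a stand-in for the domain description discussed in the text: unlabelled two-cluster data are described with an RBF kernel, and the accepted region in the input space traces out the clusters.

```python
# Domain description used for clustering (illustrative sketch).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)

# Unlabelled data drawn from two separate clusters in the input space.
X = np.vstack([
    rng.normal(loc=[-3.0, 0.0], scale=0.4, size=(100, 2)),
    rng.normal(loc=[3.0, 0.0], scale=0.4, size=(100, 2)),
])

# The RBF kernel maps the data to a feature space; the learned description
# corresponds to two disjoint regions back in the input space.
dd = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(X)

# Points near either cluster fall inside the described domain (score >= 0),
# while a point midway between the clusters typically falls outside (< 0).
probes = np.array([[-3.0, 0.0], [3.0, 0.0], [0.0, 0.0]])
print(dd.decision_function(probes))
```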
Support tensor data description
Published in Journal of Quality Technology, 2021
Classification attempts to assign a new object, represented by a vector of feature values, to one of a set of classes that are known beforehand. The classifier that performs this classification operation (or that assigns an output label to each input object) is based on a set of example objects. The one-class classification problem differs in one essential aspect from the conventional classification problem. One-class classification can be thought of as a special type of two-class classification, where data from only one class, the target class, are available for training the classifier (referred to as a one-class classifier). This means that only example objects of the target class can be used and that no information about the other class of outlier objects is present. The boundary between the two classes has to be estimated from data of only the normal, genuine class. The task is to define a boundary around the target class such that it accepts as many of the target objects as possible while minimizing the chance of accepting outlier objects.
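The accept/reject trade-off can be illustrated with a toy data description, shown below. This is not the support tensor method of the paper: it is a hypersphere around the target mean, with its radius chosen so that a desired fraction of the target training objects is accepted, evaluated on synthetic target and outlier samples.

```python
# Toy data description: hypersphere around the target mean (illustrative only).
import numpy as np

rng = np.random.default_rng(3)

X_target = rng.normal(loc=0.0, scale=1.0, size=(300, 4))   # target class only
X_outlier = rng.normal(loc=4.0, scale=1.0, size=(100, 4))   # unseen outliers

center = X_target.mean(axis=0)
dist_target = np.linalg.norm(X_target - center, axis=1)

# Radius = 95th percentile of target distances: accept most target objects
# while keeping the described volume, and hence outlier acceptance, small.
radius = np.quantile(dist_target, 0.95)

def accepts(X):
    return np.linalg.norm(X - center, axis=1) <= radius

print("target acceptance:", accepts(X_target).mean())
print("outlier acceptance:", accepts(X_outlier).mean())
```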
Change detection using least squares one-class classification control chart
Published in Quality Technology & Quantitative Management, 2020
The problem of classification tries to assign a new object, represented by a vector of feature values, to one of a set of classes that are known beforehand. The classifier that performs this classification operation (or that assigns an output label to each input object) is based on a set of example objects. The one-class classification problem differs in one essential aspect from the conventional classification problem. One-class classification can be thought of as a special type of two-class classification problem, where data from only one class, the target class, are available for training the classifier (referred to as a one-class classifier). This means that only example objects of the target class can be used and that no information about the other class of outlier objects is present. The boundary between the two classes has to be estimated from data of only the normal, genuine class. The task is to define a boundary around the target class such that it accepts as many of the target objects as possible while minimizing the chance of accepting outlier objects.
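In the change-detection setting, such a one-class score can be plotted against a control limit like a monitoring statistic. The sketch below illustrates this in general terms only; the least squares formulation of the article is not reproduced, and a Mahalanobis-style distance stands in for the score.

```python
# One-class score monitored against a control limit (illustrative sketch).
import numpy as np

rng = np.random.default_rng(4)

# Phase I: in-control (target-class) observations used to fit the description.
X_ref = rng.normal(size=(500, 3))
mean = X_ref.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X_ref, rowvar=False))

def score(x):
    d = x - mean
    return float(d @ cov_inv @ d)          # squared Mahalanobis distance

# Control limit: a high quantile of the Phase I scores.
limit = np.quantile([score(x) for x in X_ref], 0.99)

# Phase II: the process drifts after observation 30; scores above the limit
# signal departures from the target class.
stream = np.vstack([rng.normal(size=(30, 3)), rng.normal(loc=2.0, size=(20, 3))])
signals = [t for t, x in enumerate(stream) if score(x) > limit]
print("signals at observations:", signals)
```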
O-PCF algorithm for one-class classification
Published in Optimization Methods and Software, 2020
The one-class classification problem differs in one essential aspect from the conventional classification problem. In one-class classification, it is assumed that information on only one of the classes, the target class, is available. This means that only example objects from the target class can be used and that no information about the other class, namely, the class of outlier objects, is present. The boundary between the two classes must be estimated from data corresponding to only the target (normal, genuine) class. The task is to define a boundary around the target class such that as many of the target objects as possible are accepted, while the chance of accepting outlier objects is minimized [25]. The term one-class classification originates from Moya [20]. This type of classification is also known as concept learning [15], outlier/novelty detection [3,34], anomaly detection [10,33] or single-class classification [30]. Although the classifiers are typically trained only on the target class, there are some classification algorithms that use the poorly sampled outlier class (the complementary set to the target class) or unlabelled data in addition to the target class [31]. The problem of building text classifiers from positive and unlabelled examples is studied in [18], where the biased Support Vector Machines (SVM) algorithm is presented. In [9], Elkan and Noto studied the problem of identifying protein records and constructed the classifier using only positive and unlabelled data. One-class classification algorithms offer solutions to important problems. For instance, for very rare diseases, it can be difficult to obtain patient data; similarly, the fault data needed to detect machine failures in advance can be difficult to acquire. Similar difficulties also arise in object recognition, document classification, spam detection, speaker classification, etc.
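The positive-and-unlabelled setting can be sketched in the spirit of the calibration idea attributed to Elkan and Noto [9]: train a probabilistic classifier to separate labelled positives from unlabelled examples and rescale its outputs by the constant c = P(labelled | positive) estimated on held-out positives. The data below are synthetic, and logistic regression is only a convenient choice; this is not the protein-record study itself.

```python
# Positive-unlabelled learning sketch with a calibration constant c.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

pos = rng.normal(loc=2.0, size=(400, 2))    # positive class
neg = rng.normal(loc=-2.0, size=(400, 2))   # negative class (never labelled)

labelled = pos[:150]                        # the labelled sample of positives
labelled_train, labelled_holdout = labelled[:100], labelled[100:]
unlabelled = np.vstack([pos[150:], neg])    # mixture of positives and negatives

X = np.vstack([labelled_train, unlabelled])
s = np.concatenate([np.ones(len(labelled_train)), np.zeros(len(unlabelled))])

g = LogisticRegression(max_iter=1000).fit(X, s)   # models P(s=1 | x)

# c is estimated as the mean score of held-out labelled positives;
# P(y=1 | x) is then approximated by P(s=1 | x) / c.
c = g.predict_proba(labelled_holdout)[:, 1].mean()
p_y = g.predict_proba(unlabelled)[:, 1] / c
print("estimated positives among unlabelled:",
      int((p_y > 0.5).sum()), "of", len(unlabelled))
```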