Classification, Rule Generation and Evaluation using Modular Rough-fuzzy MLP
Published in Sankar K. Pal, Pabitra Mitra, Pattern Recognition Algorithms for Data Mining, 2004
Algorithms for classification-rule mining aim to discover a small set of rules from a data set that together form an accurate classifier. They are mainly used in predictive data mining tasks such as financial forecasting, fraud detection and customer retention. The challenges of classification-rule mining include scaling up to data sets with a large number of points and attributes, handling heterogeneous data with missing attribute values, and coping with dynamic/time-varying data sets.
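As a minimal illustration of the idea, the sketch below uses scikit-learn to mine a small set of ‘if… then…’ rules from labeled data; a shallow decision tree stands in for a dedicated rule-mining algorithm, since each root-to-leaf path reads off as one classification rule.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()  # a standard binary-labeled data set

# A shallow tree keeps the mined rule set small and readable;
# each root-to-leaf path reads as one "if ... then class" rule.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# Print the branches as an indented collection of if-then rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```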
Particle Morphological Analysis
Published in John Keith Beddow, T. P. Meloy, Advanced Particulate Morphology, 1980
The results show that the degree of success depends on the particular pair of powders used, although in general the results are quite satisfactory. Notice also that no single classification rule is consistently the best in all cases, which suggests that further classification rules should be developed and considered for particle classification.
Individual Transition Label Noise Logistic Regression in Binary Classification for Incorrectly Labeled Data
Published in Technometrics, 2022
The goal of classification is to provide a classification rule, or classifier, that assigns an observation to the correct class as often as possible. Most classification procedures learn a classifier from a given training dataset, and a successful method produces a classifier that generalizes well to new observations from the same population. Good classifier performance is expected when the training data are a random sample from the population distribution. Although their quality is essential to the learning process, training data are not always obtained without errors: class labels are sometimes incorrectly recorded. Label noise refers to a systematic or probabilistic error mechanism that makes the observed label differ from the true label (Frénay and Verleysen 2014). In a binary classification problem, label noise flips a positive label to a negative one, or vice versa. A classifier learned from a training dataset with incorrect labels cannot be expected to generalize well to new observations.
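A minimal sketch of this mechanism follows, assuming simple symmetric label noise with a hypothetical flip rate; scikit-learn's logistic regression stands in for a generic classifier, and the sketch merely shows how flipped training labels degrade test accuracy.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Symmetric label noise: each training label is flipped independently
# with probability flip_rate (a hypothetical setting for illustration).
flip_rate = 0.3
flips = rng.random(len(y_tr)) < flip_rate
y_noisy = np.where(flips, 1 - y_tr, y_tr)

# Train on clean vs. noisy labels; evaluate both on the clean test labels.
for name, labels in [("clean", y_tr), ("noisy", y_noisy)]:
    clf = LogisticRegression(max_iter=1000).fit(X_tr, labels)
    print(name, "test accuracy:", clf.score(X_te, y_te))
```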
Multilinear Principal Component Analysis with SVM for Disease Diagnosis on Big Data
Published in IETE Journal of Research, 2022
In 2016, Dewan et al. [19] developed a novel classifier for multi-class classification of biological data. The major issues addressed in this investigation were overfitting, noisy instances and class-imbalanced data. The developed rule-based classifier utilized two classification models: decision trees and the k-nearest-neighbor algorithm. Here, decision trees were adopted for generating the classification rules, while k-nearest-neighbor was used for analysing the misclassified instances and resolving ambiguity among contradictory rules. The performance of the developed classifier was evaluated by comparing it with well-established machine learning and data mining algorithms on genomic data. The experimental outcomes indicated that the developed classifier was superior to the other methods.
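The general idea, deriving rules from a tree and deferring ambiguous cases to a nearest-neighbor check, can be sketched roughly as follows. This is a loose illustration using scikit-learn, not Dewan et al.'s actual method, and the 0.9 confidence threshold is a hypothetical choice.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The decision tree supplies the classification rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
# k-NN acts as a fallback for instances the rules classify ambiguously.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)

# Treat an instance as "ambiguous" when its leaf's class probability
# is below a hypothetical 0.9 threshold.
proba = tree.predict_proba(X_te)
ambiguous = proba.max(axis=1) < 0.9

pred = tree.predict(X_te)
pred[ambiguous] = knn.predict(X_te[ambiguous])
print("hybrid accuracy:", (pred == y_te).mean())
```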
Severity prediction of motorcycle crashes with machine learning methods
Published in International Journal of Crashworthiness, 2020
Classification rule learning produces rule-based classifiers: it uses a systematic approach to build classification models from an input data set and classifies records using a collection of ‘if… then…’ rules [46]. Among the classification rule algorithms in WEKA, rule induction (PART) was chosen after experimenting with the available rule-based classifiers. PART is derived from the RIPPER and C4.5 algorithms and draws tactics from both: it classifies data with a set of if-then rules, combining the decision-tree (divide-and-conquer) approach of C4.5 with the separate-and-conquer rule-learning strategy of RIPPER [47]. For more information on PART, see Ref. [48].
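PART outputs an ordered list of if-then rules (a decision list): each record is classified by the first rule whose conditions it satisfies, with a default class at the end. The sketch below shows how such a rule list is applied; the crash-severity attributes, thresholds and class labels are hypothetical and purely for illustration, not rules learned by PART.

```python
# Each rule: (condition over a record dict, predicted class).
# Attributes and thresholds here are hypothetical, for illustration only.
rules = [
    (lambda r: r["speed"] > 80 and not r["helmet"], "fatal"),
    (lambda r: r["speed"] > 80, "serious"),
    (lambda r: r["night"] and r["speed"] > 50, "serious"),
]
default_class = "slight"

def classify(record):
    """Apply the ordered rule list: the first matching rule wins."""
    for condition, label in rules:
        if condition(record):
            return label
    return default_class  # no rule fired

print(classify({"speed": 95, "helmet": False, "night": False}))  # fatal
print(classify({"speed": 40, "helmet": True, "night": True}))    # slight
```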