In Search for the Optimal Preprocessing Technique for Deep Learning-Based Diabetic Retinopathy Stage Classification from Retinal Fundus Images
Published in Ranjeet Kumar Rout, Saiyed Umer, Sabha Sheikh, Amrit Lal Sangal, Artificial Intelligence Technologies for Computational Biology, 2023
Nilarun Mukherjee, Souvik Sengupta
In a binary classification setting, the evaluation metrics are based on four basic measurements: true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN). For measuring the performance of classification tasks like DR-grading, sensitivity (SN) or recall (RE), specificity (SP), accuracy (ACC), precision (PR), area under the receiver operating characteristic curve (AUC-ROC) and the quadratic weighted kappa (κ) score are commonly used. The quadratic weighted kappa (κ) score is an effective weighted measure, especially for assessing classification accuracy in multiclass problems like DR-grading, where datasets suffer from class imbalance. Equations (1.1), (1.2) and (1.3) depict the three metrics used in this work to compare the performance of the different preprocessing approaches: accuracy, quadratic weighted kappa (κ) score and AUC-ROC.
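The quadratic weighted kappa can be computed directly from the confusion matrix; a minimal NumPy sketch (labels assumed to be integers 0..n_classes−1, example values made up for illustration):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Quadratic weighted kappa for ordinal labels 0..n_classes-1."""
    # observed confusion matrix O
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # quadratic weights: penalty grows with the squared distance between grades
    idx = np.arange(n_classes)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    # expected matrix E from the marginal histograms, scaled to the same total
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (W * O).sum() / (W * E).sum()

y = [0, 1, 2, 3, 4, 2, 1]
print(quadratic_weighted_kappa(y, y, n_classes=5))  # perfect agreement -> 1.0
```

Because the weights grow quadratically with the distance between the true and predicted grade, confusing grade 0 with grade 4 is penalized far more heavily than confusing adjacent grades, which is what makes κ informative for ordinal, imbalanced tasks like DR-grading.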
Gentle Introduction to Signal Processing and Classification for Single-Trial EEG Analysis
Published in Chang S. Nam, Anton Nijholt, Fabien Lotte, Brain–Computer Interfaces Handbook, 2018
Let us put this into the mathematical framework, which can be used to generalize to more complex cases. Generally, binary classification is formalized by a decision function f: ℝk→{−1,+1} that assigns an observation x to the number −1 or +1, which are used as labels for the two classes. In the case of linear classification, the separation can be defined by a hyperplane (which is a “flat” subspace of k − 1 dimensions in the k-dimensional space; special cases are a line in 2D and a plane in 3D space). In this case, f is typically parameterized by its normal vector w and a bias term b, and the predicted class label y is given by y = f(x) = sign(wᵀx + b).
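The decision function above is a one-liner in code; a minimal sketch, where w and b are arbitrary illustrative values rather than trained parameters:

```python
import numpy as np

def predict(x, w, b):
    """Linear decision function f(x) = sign(w^T x + b), mapped to labels -1/+1."""
    return 1 if np.dot(w, x) + b >= 0 else -1

w = np.array([1.0, -2.0])  # normal vector of the separating hyperplane
b = 0.5                    # bias term shifts the hyperplane away from the origin

print(predict(np.array([2.0, 0.0]), w, b))  # positive side of the hyperplane -> 1
print(predict(np.array([0.0, 2.0]), w, b))  # negative side -> -1
```

In practice w and b are learned from training data (e.g. by LDA or an SVM); the geometry is the same regardless of the learning method.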
Fog Computing and Machine Learning
Published in Ravi Tomar, Avita Katal, Susheela Dahiya, Niharika Singh, Tanupriya Choudhury, Fog Computing, 2023
Kaustubh Lohani, Prajwal Bhardwaj, Ravi Tomar
Supervised learning can be used to address two types of problems: classification and regression. Classification problem: This involves using a supervised learning algorithm to assign data accurately to predefined categories. For example, categorizing an image of an animal as a cat or a dog is a classification problem. Classification can be binary or multi-class. In binary classification, the algorithm outputs the discrete values 0 and 1, denoting the two categories in the data. In contrast, multi-class classification algorithms deal with data having more than two categories. Popular classification algorithms include K-Nearest-Neighbors (KNN), Support Vector Machines (SVM), Random Forest Classification, Neural Networks, Logistic Regression, Naïve Bayes, and Decision Trees. Regression problem: Regression is a supervised learning problem that involves predicting an output with continuous values instead of discrete values as in the classification problem. For example, stock prices form a continuous variable that can range from 1 to infinity and take any value in between, so a stock price predictor calls for a regression algorithm. Standard regression algorithms include Linear Regression, Lasso Regression, Polynomial Regression, and Support Vector Regression.
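The cat-vs-dog example can be made concrete with the simplest member of the KNN family named above, a 1-nearest-neighbour classifier; the feature points and labels below are made-up toy values:

```python
import math

def nn_classify(query, points, labels):
    """1-nearest-neighbour: return the label of the closest training point."""
    dists = [math.dist(query, p) for p in points]
    return labels[dists.index(min(dists))]

# Toy 2-D features for labelled training images
points = [(1.0, 1.0), (1.2, 0.8), (8.0, 9.0), (9.0, 8.5)]
labels = ["cat", "cat", "dog", "dog"]

print(nn_classify((1.1, 0.9), points, labels))  # near the cat cluster -> "cat"
print(nn_classify((8.5, 9.0), points, labels))  # near the dog cluster -> "dog"
```

Replacing the discrete labels with real numbers and returning, say, the average of the k nearest targets turns the same idea into a regression algorithm, which illustrates how the two problem types differ only in the output space.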
Machine intelligence aware electricity theft detection for smart metering applications
Published in Waves in Random and Complex Media, 2023
Shoaib Munawar, Zeeshan Aslam Khan, Naveed Ishtiaq Chaudhary, Nadeem Javaid, Muhammad Asif Zahoor Raja
A model's skewness towards the majority class is a serious issue and needs proper attention. Such skewness arises from model bias and imbalanced data. Binary classification is based on two classes: the honest class, labeled 0, and the fraudulent class, labeled 1. It is essential to feed balanced data to a classifier. Because fraudulent consumers are rarely available, data augmentation techniques are applied to balance the numbers of fraudulent and honest consumers before they are passed as input to the classifier. In our scenario, six FDIs are used to manipulate the honest consumers' data, so six fraudulent variants are synthesized from each honest consumer. The number of fraudulent consumers thus becomes six times the number of honest consumers, which is still a problem for a model to carry out a fair classification. To tackle this issue, a data augmentation technique is used to balance the numbers of benign and fraudulent consumers. Initially, 1500 benign consumers are considered, whose data are manipulated through the six FDIs to generate data for 9000 fraudulent consumers. Since the fraudulent consumers now outnumber the benign ones, the borderline SMOTE-SVM data augmentation technique is applied to balance the data. After applying this minority-class oversampling technique, a total of 18000 consumers are reported, of which 9000 are fraudulent and 9000 are benign. The balanced data is then fed to the classifier.
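The core idea of SMOTE-family oversampling is to synthesize new minority samples by interpolating between an existing minority sample and one of its nearest minority neighbours. A minimal self-contained sketch of that interpolation step (not the borderline SMOTE-SVM variant used here, and with made-up 2-D points):

```python
import random

def smote_like(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic points by interpolating between minority
    samples and their k nearest minority neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # k nearest minority neighbours of a (excluding a itself)
        neighbours = sorted(
            (p for p in minority if p != a),
            key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)),
        )[:k]
        b = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return synthetic

minority = [(1.0, 2.0), (1.5, 1.8), (2.0, 2.2)]
new_points = smote_like(minority, n_new=3)
print(len(minority) + len(new_points))  # 6 minority samples after augmentation
```

Because every synthetic point lies on a segment between two real minority samples, the oversampled class stays inside its original region of feature space; the borderline and SVM variants additionally focus the interpolation on samples near the class boundary.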
L-measure evaluation metric for fake information detection models with binary class imbalance
Published in Enterprise Information Systems, 2021
Li Li, Yong Wang, Chia-Yu Hsu, Yibin Li, Kuo-Yi Lin
The most common problem suffered by classification methods used for detecting fake information is binary class imbalance (i.e. less fake information than real information). Although many of the aforementioned methods attempt to solve this problem, a comprehensive and computationally simple evaluation of the performance of these models is required. The evaluation metrics used in binary classification models mainly include precision, recall, specificity, G-mean, F-measure, area under the curve (AUC), and the Matthews correlation coefficient (MCC). The robustness of these evaluation metrics is affected by the imbalance ratio (IR); moreover, some can only be used in special cases or are computationally demanding. Therefore, this paper proposes a robust and computationally inexpensive evaluation metric, the L-measure, based on the F-measure, to evaluate the classification performance of models; it combines the advantages of the metrics discussed previously.
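Two of the imbalance-aware metrics named above, G-mean and MCC, are cheap to compute from the four confusion counts; a minimal sketch with an illustrative imbalanced example (the L-measure itself is specific to the paper and not reproduced here):

```python
import math

def g_mean(tp, tn, fp, fn):
    """Geometric mean of sensitivity and specificity."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return math.sqrt(sensitivity * specificity)

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient; 0 when any marginal count is zero."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Imbalanced example: 10 fake (positive) items vs 90 real (negative) items.
print(round(g_mean(tp=8, tn=80, fp=10, fn=2), 3))  # 0.843
print(round(mcc(tp=8, tn=80, fp=10, fn=2), 3))     # 0.538
```

Unlike plain accuracy (here 88%, flattered by the majority class), both metrics combine performance on the two classes, which is why they are preferred under class imbalance.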
Deep convolutional neural network for three-dimensional objects classification using off-axis digital Fresnel holography
Published in Journal of Modern Optics, 2022
B. Lokesh Reddy, R N Uma Mahesh, Anith Nelleri
The following four evaluation metrics are considered for the binary classification task: accuracy, precision, recall, and F1-score. The accuracy is measured as the ratio of the total number of correct predictions to the total number of instances. The precision is the ratio of the number of instances correctly predicted as positive to the total number of instances predicted as positive. The recall is the ratio of the number of instances correctly predicted as positive to the total number of instances that are actually positive. The F1-score combines precision and recall into a single value, their harmonic mean.
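These four definitions translate directly into code from the TP/TN/FP/FN counts; the counts below are made-up illustrative values:

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall and F1 from the four confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # correct / all instances
    precision = tp / (tp + fp)                   # correct positives / predicted positives
    recall = tp / (tp + fn)                      # correct positives / actual positives
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

acc, pr, re, f1 = metrics(tp=40, tn=45, fp=5, fn=10)
print(round(acc, 3), round(pr, 3), round(re, 3), round(f1, 3))
# 0.85 0.889 0.8 0.842
```

Reporting all four together, as done here, guards against the case where one metric looks strong in isolation (e.g. high recall obtained simply by predicting everything positive).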