Artificial Intelligence in Assessment and Evaluation of Program Outcomes/Program Specific Outcomes
Published in P. Kaliraj, T. Devi, Artificial Intelligence Theory, Models, and Applications, 2021
The confusion matrix is a table that is often used to describe the performance of a classification model on a set of test data for which the true values are known [3,5]. The generated confusion matrix is provided in Table 12.7. We have 150 samples, of which 50 are DNMEs, 50 are PEs, and 50 are EEs. All the correct predictions lie on the diagonal of the matrix. The prediction errors can be easily located in the matrix, as they are represented by values outside the diagonal, and these errors are minimal.
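As an illustration of how the correct predictions fall on the diagonal, the following is a minimal sketch using scikit-learn's confusion_matrix with 150 hypothetical samples; the DNME/PE/EE label arrays and the injected errors are placeholders, not the data behind Table 12.7.

```python
# Minimal sketch (not the chapter's code): build a 3-class confusion matrix with
# scikit-learn and read the correct predictions off its diagonal.
import numpy as np
from sklearn.metrics import confusion_matrix

labels = ["DNME", "PE", "EE"]

# Hypothetical true and predicted labels for 150 samples (50 per class).
y_true = np.repeat(labels, 50)
y_pred = y_true.copy()
y_pred[[3, 77, 120]] = ["PE", "EE", "DNME"]   # introduce a few illustrative errors

cm = confusion_matrix(y_true, y_pred, labels=labels)
print(cm)

correct = np.trace(cm)        # diagonal entries are the correct predictions
errors = cm.sum() - correct   # off-diagonal entries are the prediction errors
print(f"correct: {correct}, errors: {errors}")
```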
A Data-Driven Approach for Multiobjective Loan Portfolio Optimization Using Machine-Learning Algorithms and Mathematical Programming
Published in Ramakrishnan Ramanathan, Muthu Mathirajan, A. Ravi Ravindran, Big Data Analytics Using Multiple Criteria Decision-Making Models, 2017
Sharan Srinivas, Suchithra Rajendran
The accuracy of the classifier is determined by constructing a confusion matrix from the actual and predicted outputs. A model has high accuracy if the predicted output matches the actual output for most instances. As shown in Figure 8.9, the confusion matrix has four categories, namely true positive, true negative, false positive, and false negative.
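A minimal sketch of the accuracy computation implied by Figure 8.9, expressed directly in terms of the four categories; the counts used below are hypothetical, not values from the chapter.

```python
# Accuracy from the four confusion-matrix categories (hypothetical counts).
def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Fraction of instances whose predicted output matches the actual output."""
    return (tp + tn) / (tp + tn + fp + fn)

# e.g. a binary loan-default classifier evaluated on 1,000 hypothetical loans
print(accuracy(tp=120, tn=800, fp=30, fn=50))   # 0.92
```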
Recognition of Types of Arrhythmia: An Implementation of Ensembling Techniques Using ECG Beat
Published in Ranjeet Kumar Rout, Saiyed Umer, Sabha Sheikh, Amrit Lal Sangal, Artificial Intelligence Technologies for Computational Biology, 2023
Arshpreet Kaur, Kumar Shashvat, Hemant Kr. Soni
The results of the work are presented and explained below in Tables 13.2, 13.3, and 13.4. To evaluate the performance, we used a confusion matrix, which is a table that describes the performance of a classification model on test data for which the true values are known.
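The chapter's exact ensemble is not reproduced here, but the sketch below shows the general pattern it describes: a majority-vote ensemble of base classifiers evaluated on held-out test data with a confusion matrix. The synthetic data, base classifiers, and class indices are placeholders.

```python
# Illustrative sketch only: a simple hard-voting ensemble evaluated with a
# confusion matrix, in the spirit of Tables 13.2-13.4. Data are synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_classes=3, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("knn", KNeighborsClassifier()),
                ("dt", DecisionTreeClassifier(random_state=0))],
    voting="hard",   # majority vote across the base classifiers
)
ensemble.fit(X_train, y_train)

# Rows: true class, columns: predicted class (placeholder beat-type indices).
print(confusion_matrix(y_test, ensemble.predict(X_test)))
```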
How gaps are created during anticipation of lane changes
Published in Transportmetrica B: Transport Dynamics, 2023
Kequan Chen, Victor L. Knoop, Pan Liu, Zhibin Li, Yuxuan Wang
A confusion matrix is adopted to examine the prediction accuracy. In general, a confusion matrix includes four essential elements: (1) true positive (TP), (2) false positive (FP), (3) true negative (TN), and (4) false negative (FN). Based on these four quantities, the recall and the false alarm rate (FAR) are computed to evaluate the minority and majority classes, respectively. The recall indicates how many samples in group 1 are correctly detected out of all the group 1 samples. The FAR indicates how many samples in group 2 are wrongly detected as group 1 out of all the group 2 samples. The receiver operating characteristic (ROC) curve is constructed with the FAR on the horizontal axis and the recall on the vertical axis. Note that the threshold used to compute the ROC increases from 0 to 1 in steps of 0.1. The area under the ROC curve (AUC) is calculated to evaluate the ability to discriminate between group 1 and group 2. The detailed results for model 1 and model 2, including the confusion matrix, recall, FAR, and AUC, are provided in Table 8. The results indicate that model 2 outperforms model 1 in prediction capability.
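A sketch of the evaluation procedure described above, with hypothetical scores standing in for the models' outputs: recall and FAR are computed at thresholds from 0 to 1 in steps of 0.1, and the AUC is obtained from the resulting ROC points by the trapezoidal rule.

```python
# Recall = TP/(TP+FN), FAR = FP/(FP+TN), ROC from a 0-to-1 threshold sweep in
# steps of 0.1, AUC by the trapezoidal rule. Labels and scores are placeholders.
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)                  # 1 = minority group 1
scores = y_true * 0.3 + rng.random(500) * 0.7          # placeholder model scores

recalls, fars = [], []
for threshold in np.arange(0.0, 1.01, 0.1):
    y_pred = (scores >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    recalls.append(tp / (tp + fn))                     # group 1 correctly detected
    fars.append(fp / (fp + tn))                        # group 2 wrongly flagged as group 1

# Sort ROC points by FAR, then integrate the area under the curve.
order = np.argsort(fars)
f, r = np.array(fars)[order], np.array(recalls)[order]
auc = np.sum(np.diff(f) * (r[:-1] + r[1:]) / 2)        # trapezoidal rule
print(f"AUC ~ {auc:.3f}")
```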
A risk-based machine learning approach for probabilistic transient stability enhancement incorporating wind generation
Published in Australian Journal of Electrical and Electronics Engineering, 2023
For the classification task, the first step was to select the input and output data for the ANN classification model. System load, fault type, fault location and FCT were used as inputs to the ANN, and Si was selected as the output. The total number of data samples was set to 8,000 (500 for each line). The random data division for training, validation and testing was set at 5,600 (70%), 1,200 (15%) and 1,200 (15%), respectively. The Levenberg–Marquardt backpropagation algorithm was used to train the ANN. The number of neurons in the hidden layer was chosen as 20 (based on a trial-and-error approach). To quantify the performance of the trained classifier, the confusion matrix was used. A confusion matrix is simply a table used to define the performance of a classification algorithm: it visualises and summarises the number of samples predicted correctly and incorrectly.
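A hedged sketch of this workflow in scikit-learn; note that MLPClassifier does not offer Levenberg–Marquardt training, so the 'lbfgs' solver is used as a stand-in, and the input features and Si labels below are random placeholders rather than the study's power-system data.

```python
# Sketch only: 70/15/15 split of 8,000 samples, an ANN with 20 hidden neurons,
# and a confusion matrix on the test set. Data and labels are placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((8000, 4))                     # placeholder load / fault-type / location / FCT features
y = (X.sum(axis=1) > 2.0).astype(int)         # placeholder stability label Si

# 70% training, 15% validation, 15% testing (5,600 / 1,200 / 1,200 samples).
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.50, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(20,), solver="lbfgs", max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

# Summarises the number of test samples predicted correctly and incorrectly.
print(confusion_matrix(y_test, clf.predict(X_test)))
```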
A Self-Adaptive Chimp-Driven Modified Deep Learning Framework for Autonomous Vehicles to Obtain Autonomous Object Classification
Published in Electric Power Components and Systems, 2023
A confusion matrix, also known as an error matrix, is a table that is often used to evaluate the performance of a classification model. It displays the predicted class and the actual class in a tabular format, and the two are compared to calculate performance metrics for the model. The matrix consists of four entries: TP, TN, FP, and FN. From these entries, various performance metrics can be calculated, such as accuracy, precision, recall, and F1 score. A confusion matrix is thus a useful tool for evaluating a classification model and identifying the areas where it needs improvement. The confusion matrix for the obtained classes is shown in Figure 7.
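For concreteness, a small sketch of the metrics the excerpt lists, computed directly from the four entries; the counts used are hypothetical and unrelated to Figure 7.

```python
# Accuracy, precision, recall and F1 score from the four confusion-matrix
# entries (hypothetical counts for a binary case).
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
    }

print(metrics(tp=420, tn=510, fp=40, fn=30))
```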