Machine Learning
Published in Ian Foster, Rayid Ghani, Ron S. Jarmin, Frauke Kreuter, Julia Lane, Big Data and Social Science, 2020
Even so, there often is a tradeoff between precision and recall. By selecting different classification thresholds, we can vary and tune the precision and recall of a given classifier. A highly conservative classifier that only predicts a 1 when it is absolutely certain (e.g., a threshold of 0.9999) will most often be correct when it predicts a 1 (high precision) but will miss most instances of 1 (low recall). At the other extreme, a classifier that gives 1 to every data point (a threshold of 0.0001) will have perfect recall but low precision. Figure 7.10 shows a precision–recall curve that is used to represent the performance of a given classifier.
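To make the threshold tradeoff concrete, here is a minimal sketch (assuming scikit-learn, a synthetic dataset, and logistic regression; none of these appear in the original text) that sweeps classification thresholds and reports the resulting precision and recall:

```python
# Minimal sketch: sweeping classification thresholds to trace the
# precision-recall tradeoff (assumes scikit-learn; data is synthetic).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]  # predicted probability of class 1

# precision_recall_curve varies the threshold over all observed scores,
# which is what a precision-recall curve plot summarizes.
precision, recall, thresholds = precision_recall_curve(y_test, scores)

# A very high threshold behaves like the "conservative" classifier
# (high precision, low recall); a very low threshold gives the opposite.
for t in (0.9999, 0.5, 0.0001):
    pred = (scores >= t).astype(int)
    tp = ((pred == 1) & (y_test == 1)).sum()
    fp = ((pred == 1) & (y_test == 0)).sum()
    fn = ((pred == 0) & (y_test == 1)).sum()
    p = tp / (tp + fp) if tp + fp else float("nan")
    r = tp / (tp + fn) if tp + fn else float("nan")
    print(f"threshold={t}: precision={p:.3f}, recall={r:.3f}")
```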
Corrosion Segmentation and Quantitative Analysis Based on Deep Neural Networks
Published in Nigel Powers, Dan M. Frangopol, Riadh Al-Mahaidi, Colin Caprani, Maintenance, Safety, Risk, Management and Life-Cycle Performance of Bridges, 2018
Precision and recall are two common metrics used in classification: precision describes how accurately the model identifies targets, while recall describes how many of the targets the model manages to find. In most cases they are negatively related, so a threshold is required for classifying predicted conditions. The PR curve is used to select the classification threshold, as shown in Figure 8; precision and recall intersect at 0.5, which is selected as the classification threshold in our network.
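As a rough illustration of picking the threshold at the precision–recall intersection (the break-even point the authors describe), the sketch below assumes scikit-learn and arrays of ground-truth labels and predicted scores, and selects the threshold where |precision − recall| is smallest; the toy data is hypothetical:

```python
# Sketch: choose the classification threshold at the precision-recall
# break-even point (where the two curves intersect).
import numpy as np
from sklearn.metrics import precision_recall_curve

def break_even_threshold(y_true, scores):
    precision, recall, thresholds = precision_recall_curve(y_true, scores)
    # precision/recall have one more entry than thresholds; drop the last.
    gap = np.abs(precision[:-1] - recall[:-1])
    i = int(np.argmin(gap))  # point where precision ~= recall
    return thresholds[i], precision[i], recall[i]

# Hypothetical labels and scores, for illustration only:
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.55, 0.9])
t, p, r = break_even_threshold(y_true, scores)
print(f"threshold={t:.2f}, precision={p:.2f}, recall={r:.2f}")
```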
Evaluation of Learner
Published in Peter Wlodarczak, Machine Learning and its Applications, 2019
To fully evaluate the result of a classification task, we need to examine both precision and recall. In practice, improving precision often reduces recall and vice versa: as the number of false positives decreases, the number of false negatives typically increases. There is always a tradeoff between precision and recall, and what counts as good values for both depends on the problem at hand. This makes it difficult to compare a model with high precision and low recall against one with the opposite profile.
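One common way to compare such models, related to the F1 score discussed in the next excerpt, is an F-beta score that folds precision and recall into a single number, with beta weighting recall relative to precision. The sketch below uses hypothetical precision/recall values, purely for illustration:

```python
# Sketch: comparing models with different precision/recall profiles
# via the F-beta score (beta < 1 favours precision, beta > 1 recall).
def f_beta(precision, recall, beta=1.0):
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

model_a = {"precision": 0.95, "recall": 0.40}  # conservative model
model_b = {"precision": 0.55, "recall": 0.90}  # permissive model

for beta in (0.5, 1.0, 2.0):
    fa = f_beta(model_a["precision"], model_a["recall"], beta)
    fb = f_beta(model_b["precision"], model_b["recall"], beta)
    print(f"beta={beta}: model A F={fa:.3f}, model B F={fb:.3f}")
```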
Automated pothole condition assessment in pavement using photogrammetry-assisted convolutional neural network
Published in International Journal of Pavement Engineering, 2023
Eshta Ranyal, Ayan Sadhu, Kamal Jain
While precision measures the proportion of positive predictions that are correct, recall measures the proportion of all positives in the data that are correctly predicted. The F1 score is the harmonic mean of precision and recall and is a good measure of the overall performance of a test or model, as it combines the two into a single metric. A high F1 score implies both high recall (a low false-negative rate) and high precision (a low false-positive rate), which is the end goal of any CNN model. Another useful tool is the precision-recall curve, which plots precision along the y-axis against recall along the x-axis and shows the trade-off between the two across probability thresholds. Average precision (Gerard and Michael 1983), or mean average precision (mAP), summarises the area under the precision-recall curve (PR-curve) by sampling precision at separate recall values (Everingham et al. 2010) for each class; in other words, mAP is the average of the per-class average precisions. Since this study is single-class (pothole), mAP is calculated as the average of precisions over the data samples. Precision and recall have a direct interpretation in object detection: precision indicates how many of the detected bounding boxes are true objects, while recall indicates how many of the actual objects are detected. A model is considered a good predictive model if precision stays high as recall increases. The precision-recall metrics, along with the mAP and IoU metrics, are used in this study to evaluate the model's performance.
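As a hedged sketch of the two detection metrics named here, the following computes IoU for a pair of axis-aligned boxes and a single-class average precision via scikit-learn; the box coordinates, match flags, and confidence scores are all hypothetical, not taken from the study:

```python
# Sketch: IoU for axis-aligned boxes and single-class average precision
# (AP summarises the area under the PR curve). Assumes scikit-learn.
from sklearn.metrics import average_precision_score

def iou(box_a, box_b):
    """Boxes as (x1, y1, x2, y2): intersection area over union area."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143

# Single-class AP: 1 = detection matched a ground-truth object, 0 = not.
matched = [1, 1, 0, 1, 0, 0, 1]                  # per-detection match flags
scores = [0.95, 0.9, 0.8, 0.7, 0.6, 0.5, 0.3]    # detection confidences
print(average_precision_score(matched, scores))
```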
Features extraction of MRI image using complex network with low computational complexity to distinguish inflammatory lesions from tumors in the human brain
Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2023
Trong Thanh Han, Tung Nguyen Duy, Lam Nguyen Dang Son, Hinh Nguyen Van, Tuan Do Trong, Dung Nguyen Viet, Dung Nguyen Tuan, Luu Vu Dang
Precision describes how good the model is at correctly predicting the positive class among all positive predictions; recall describes how good the model is at correctly predicting the positive class among all actual positive cases; and the F1 score is the harmonic mean of precision and recall. The ROC curve is built from the confusion matrix and is an important tool for evaluating the effectiveness of a model on a classification problem. In ROC space, the False Positive Rate (FPR) and True Positive Rate (TPR) are used to plot the curve: TPR is the ratio of the model's correct positive-class predictions to the total number of actual positives, and FPR is the ratio of the model's false positive-class predictions to the total number of actual negatives. AUC is the area under the ROC curve, and AUC-ROC is an effective summary of classifier performance in problems with many parameters.
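A minimal sketch of computing the ROC curve and AUC from these definitions, assuming scikit-learn; the labels and scores below are hypothetical:

```python
# Sketch: TPR vs. FPR (the ROC curve) and the area under it (AUC).
from sklearn.metrics import roc_curve, roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
scores = [0.1, 0.3, 0.4, 0.8, 0.2, 0.9, 0.5, 0.7, 0.6, 0.35]

# TPR = TP / (TP + FN) and FPR = FP / (FP + TN), swept over thresholds.
fpr, tpr, thresholds = roc_curve(y_true, scores)
print("AUC =", roc_auc_score(y_true, scores))
```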
Prediction of Sulfur Content in Copra Using Machine Learning Algorithm
Published in Applied Artificial Intelligence, 2021
A. S. Sagayaraj, T. K. Devi, S. Umadevi
Cross entropy and error percentage are calculated to measure the performance of the network: minimizing cross entropy yields better classification performance, and the error percentage, which measures the proportion of misclassified data, should be as low as possible for a good classifier. The confusion matrices in Figure 8 show the results for training, testing, and validation, and together they determine the performance of the classification of sulfur in the copra. The all-confusion matrix is used to read TP, FP, TN, and FN visually and to calculate classification parameters such as accuracy, sensitivity, specificity, precision, and recall. From the confusion matrices, training accuracy is 98.2%, testing accuracy is 92%, and the overall accuracy across all confusion matrices is 96.5%. From the values of TP, FP, TN, and FN, sensitivity is 95%, specificity is 97%, precision is 97%, and recall is 95%. Figure 9 shows the training state, which records the training history of the data, with a gradient of 0.025257 at epoch 21. Table 5 describes the accuracy measures of the pattern recognition, in which TP, FP, TN, and FN are measured to determine the accuracy parameters sensitivity, specificity, precision, recall, and F-measure.
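A brief sketch of how these parameters follow from TP, FP, TN, and FN once they are read off a confusion matrix; the counts below are hypothetical, chosen only to illustrate the formulas:

```python
# Sketch: deriving accuracy, sensitivity (recall), specificity,
# precision, and F-measure from confusion-matrix counts.
def classification_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)          # a.k.a. recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return dict(sensitivity=sensitivity, specificity=specificity,
                precision=precision, accuracy=accuracy, f_measure=f_measure)

print(classification_metrics(tp=95, fp=3, tn=97, fn=5))
```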