Screening and Diagnostic Tests
Published in Marcello Pagano, Kimberlee Gauvreau, Heather Mattie, Principles of Biostatistics, 2022
Marcello Pagano, Kimberlee Gauvreau, Heather Mattie
The relationship between sensitivity and specificity can be illustrated using a graph known as a receiver-operating characteristic (ROC) curve. An ROC curve is a line graph that plots the probability of a true positive result – the sensitivity of the diagnostic test – against the probability of a false positive result for a range of different cutoff points. These graphs were first used in the field of communications. As an example, Figure 6.4 displays an ROC curve for the data contained in Table 6.1. When an existing test is being evaluated, this type of graph may be used to help assess the usefulness of the test and to determine the most appropriate cutoff point. The dashed line in Figure 6.4 corresponds to a test that gives positive and negative results by chance alone, as if the result were determined by flipping a coin; such a test has no inherent value. The closer the curve lies to the upper left-hand corner of the graph, the more accurate the test. Furthermore, the point that lies closest to this upper corner is usually chosen as the cutoff that maximizes sensitivity and specificity simultaneously.
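A minimal Python sketch of this idea, assuming scikit-learn is available and using made-up scores and disease labels (not the Table 6.1 data): sweep the cutoffs with roc_curve and pick the point closest to the upper left-hand corner.

import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical test scores and true disease status (1 = diseased)
y_true = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1, 1])
scores = np.array([0.1, 0.2, 0.3, 0.35, 0.4, 0.5, 0.6, 0.65, 0.8, 0.9])

fpr, tpr, thresholds = roc_curve(y_true, scores)

# Distance of each (FPR, TPR) point from the upper left-hand corner (0, 1)
dist = np.sqrt(fpr**2 + (1 - tpr)**2)
best = np.argmin(dist)
print("cutoff:", thresholds[best],
      "sensitivity:", tpr[best],
      "specificity:", 1 - fpr[best])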
Automated Methods for Vessel Segmentation in X-ray Coronary Angiography and Geometric Modeling of Coronary Angiographic Image Sequences: A Survey
Published in Kayvan Najarian, Delaram Kahrobaei, Enrique Domínguez, Reza Soroushmehr, Artificial Intelligence in Healthcare and Medicine, 2022
Zijun Gao, Kritika Iyer, Lu Wang, Jonathan Gryak, C. Alberto Figueroa, Kayvan Najarian, Brahmajee K. Nallamothu, Reza Soroushmehr
The receiver operating characteristic (ROC) curve, which illustrates the performance of a binary classifier system as its discrimination threshold is varied, is also often reported. The area under the ROC curve (AUROC) is used as an evaluation metric, indicating how well the model can discriminate between cases (positive examples) and non-cases (negative examples). A perfect classifier has an AUROC of one.
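A brief sketch of the two reference points mentioned above, assuming scikit-learn and synthetic labels: a score that perfectly separates cases from non-cases gives an AUROC of 1, while a chance-level score gives roughly 0.5.

import numpy as np
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 0, 1, 1, 1]
perfect_scores = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]  # cases always score higher
print(roc_auc_score(y_true, perfect_scores))      # 1.0

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 10_000)
print(roc_auc_score(y, rng.random(10_000)))       # approximately 0.5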
Meta-Analysis for Evaluating Diagnostic Accuracy
Published in Ding-Geng (Din) Chen, Karl E. Peace, Applied Meta-Analysis with R and Stata, 2021
Additionally, a popular graphical method for evaluating diagnostic accuracy is the receiver operating characteristic (ROC) curve, which plots a series of values of sensitivity against 1 − specificity as the cut-off value c runs through all possible values (Yin and Tian 2016). Hence the ROC curve and the corresponding ROC indices (Yin and Tian 2014b) can summarize sensitivity and specificity across all possible diagnostic thresholds (Figure 11.3). The name “receiver operating characteristic” curve means that the receiver (user) of the test can operate at any point on the curve by using the appropriate diagnostic threshold to determine the characteristics of the test (Zhou et al. 2009).
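A minimal sketch of this construction in Python (illustrative biomarker values, not from the chapter): sweep the cut-off c and compute one (1 − specificity, sensitivity) coordinate for each value.

import numpy as np

# Hypothetical biomarker values for diseased and healthy subjects
cases = np.array([6.2, 7.1, 5.8, 8.0, 6.9])
non_cases = np.array([4.1, 5.0, 3.9, 5.5, 4.7])

# Test is positive when the biomarker exceeds the cut-off c
for c in np.sort(np.concatenate([cases, non_cases])):
    sensitivity = np.mean(cases > c)        # P(positive test | diseased)
    specificity = np.mean(non_cases <= c)   # P(negative test | healthy)
    print(f"c = {c:.1f}  sensitivity = {sensitivity:.2f}  "
          f"1 - specificity = {1 - specificity:.2f}")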
Application of supervised machine learning algorithms for the evaluation of utricular function on patients with Meniere’s disease: utilizing subjective visual vertical and ocular-vestibular-evoked myogenic potentials
Published in Acta Oto-Laryngologica, 2023
Phillip G. Bragg, Benjamin M. Norton, Michelle R. Petrak, Allyson D. Weiss, Lindsay M. Kandl, Megan L. Corrigan, Cammy L. Bahner, Akihiro J. Matsuoka
To visualize the actual decision boundary between the classes that each of the four classification models generated, we computed a decision boundary using DecisionBoundaryDisplay in the sklearn.inspection module of scikit-learn. To determine which classification algorithm most accurately predicts the probability of an example belonging to each class label, we used the area under the curve (AUC) of the receiver operating characteristic (ROC) curve. A ROC curve is a graphical plot generated by plotting the fraction of true positives out of the positives (TPR = true positive rate) vs. the fraction of false positives out of the negatives (FPR = false positive rate), at various threshold settings. A ROC curve can function as a summary measure of performance across potential thresholds for positivity, rather than performance at any specific threshold. TPR and FPR were calculated using the following formulas: TPR = TP / (TP + FN) and FPR = FP / (FP + TN), where TP, FN, FP, and TN denote the numbers of true positives, false negatives, false positives, and true negatives, respectively.
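A short Python sketch of these formulas, assuming scikit-learn and hypothetical predictions at one threshold setting:

from sklearn.metrics import confusion_matrix

# Hypothetical binary labels and thresholded predictions
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
tpr = tp / (tp + fn)   # fraction of true positives out of the positives
fpr = fp / (fp + tn)   # fraction of false positives out of the negatives
print("TPR:", tpr, "FPR:", fpr)

Repeating this calculation at each threshold setting traces out the ROC curve.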
Clinical interpretation and cutoff scores for manual ability measured by the ABILHAND questionnaire in people with stroke
Published in Topics in Stroke Rehabilitation, 2023
Elisabeth Ekstrand, Margit Alt Murphy, Katharina S Sunnerhagen
The receiver operating characteristic (ROC) curve analysis was used to determine the optimal cutoff scores. The ROC curve shows the relationship between the sensitivity and specificity for every possible cutoff. The optimal cutoff score is defined as the coordinate point of the ROC curve that maximizes the sum of sensitivity (true positive rate) and specificity (1 − false positive rate). The accuracy of the test is measured by the area under the curve (AUC). The AUC equals 0.5 when the ROC curve corresponds to random chance and 1.0 for perfect accuracy. In the present study, the AUC was interpreted as good when >0.80 and excellent when >0.90 [42]. Furthermore, matching and non-matching groups for the low and good upper extremity functioning levels were calculated and displayed in scatterplots.
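A minimal Python sketch of this cutoff selection, assuming scikit-learn and made-up scores and group labels (the study itself used ABILHAND data): maximize the sum of sensitivity and specificity over the ROC coordinate points, and report the AUC.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical questionnaire scores and dichotomized outcome (1 = good functioning)
y_true = np.array([0, 0, 1, 0, 1, 1, 0, 1, 1, 0])
score = np.array([1.2, 0.8, 2.5, 1.0, 2.0, 3.1, 1.5, 2.8, 2.2, 0.5])

fpr, tpr, thresholds = roc_curve(y_true, score)
youden = tpr + (1 - fpr) - 1     # sensitivity + specificity - 1
best = np.argmax(youden)
print("optimal cutoff:", thresholds[best])
print("AUC:", roc_auc_score(y_true, score))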
Artificial intelligence image recognition of melanoma and basal cell carcinoma in racially diverse populations
Published in Journal of Dermatological Treatment, 2022
Pushkar Aggarwal, Francis A. Papay
Statistical analysis of the CNN output was performed in R software (23). Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and F1 score were calculated for the testing outputs from the CNN models. The F1 score is the harmonic mean of sensitivity and PPV, and it can be used to assess the performance of machine learning models. Receiver-operating characteristic (ROC) curves were created and their respective areas under the curve (AUC) were calculated for BCC and melanoma on both CNN models using R software. An ROC curve is a graphical representation of the performance of the classification model. AUC is the area under the ROC curve, and it summarizes the performance of the model. AUC ranges from 0 to 1, with a higher value indicating a better classification model.
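A compact Python sketch of these metrics (the paper used R; the data and labels here are illustrative), computed from a hypothetical confusion matrix:

from sklearn.metrics import confusion_matrix

# Hypothetical test-set labels and CNN predictions (1 = melanoma)
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)   # positive predictive value
npv = tn / (tn + fn)   # negative predictive value
f1 = 2 * sensitivity * ppv / (sensitivity + ppv)  # harmonic mean of sensitivity and PPV
print(sensitivity, specificity, ppv, npv, f1)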