Machine Learning Techniques for Prediction of Diabetes
Published in Punit Gupta, Dinesh Kumar Saini, Rohit Verma, Healthcare Solutions Using Machine Learning and Informatics, 2023
Tarun Jain, Payal Garg, Jalak Yogesh Patel, Div Chaudhary, Horesh Kumar, Vivek K. Verma, Rishi Gupta
Generally, there is an inverse relationship between precision and recall. The F1-score incorporates both precision and recall: it is the harmonic mean of the two, and therefore gives an integrated view of these metrics. It reaches its peak value when recall equals precision.
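The harmonic-mean property described above can be sketched in a few lines; the precision/recall values below are illustrative, not taken from the chapter:

```python
# Minimal sketch: F1 as the harmonic mean of precision and recall.

def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# The harmonic mean is pulled toward the smaller value, so for a fixed
# sum the F1-score peaks when precision equals recall.
print(f1_score(0.8, 0.8))  # 0.8
print(f1_score(0.9, 0.7))  # 0.7875, lower despite the same arithmetic mean
```

This illustrates why F1 penalizes an imbalance between the two metrics: trading recall for precision (or vice versa) around a fixed average always lowers the score.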
An Optimal Diabetic Features-Based Intelligent System to Predict Diabetic Retinal Disease
Published in Ayodeji Olalekan Salau, Shruti Jain, Meenakshi Sood, Computational Intelligence and Data Sciences, 2022
M. Shanmuga Eswari, S. Balamurali
Model evaluation is an important step in the creation of a model. It aids in the selection of the best model to represent our data, as well as the prediction of how well the chosen model will perform in the future. Both methods use a test set (not seen by the model) to evaluate model performance in order to avoid over-fitting. Accuracy, precision, and recall are the three basic measures used to evaluate a classification model. The percentage of correct predictions for the test data is known as accuracy. It is simple to compute: divide the number of correct predictions by the total number of predictions.
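The accuracy calculation described above can be sketched directly; the test-set labels below are made up for illustration:

```python
# Minimal sketch: accuracy = correct predictions / total predictions
# on a held-out test set (hypothetical labels, not from the chapter).

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # ground-truth test labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # model predictions on the test set

correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(accuracy)  # 6 correct out of 8 -> 0.75
```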
Applied Data Science
Published in Connie White Delaney, Charlotte A. Weaver, Joyce Sensmeier, Lisiane Pruinelli, Patrick Weber, Nursing and Informatics for the 21st Century – Embracing a Digital World, 3rd Edition, Book 3, 2022
Lisiane Pruinelli, Maxim Topaz
Machine learning performance is typically measured by an algorithm's ability to make correct predictions about the outcome of the study. Algorithms implemented in this study achieved good predictive performance in identifying patients at risk. Specifically, the algorithm's performance metrics were: precision = 0.83 (the proportion of predicted positives that are true positives); recall = 0.81 (the proportion of actual positives that are correctly identified); F score = 0.82 (the harmonic mean of the precision and recall); and area under the precision–recall curve = 0.76 (a single scalar value that summarizes the overall performance of a binary classifier).
Significance of platelets in the early warning of new-onset AKI in the ICU by using supervised learning: a retrospective analysis
Published in Renal Failure, 2023
Pan Pan, Yuhong Liu, Fei Xie, Zhimei Duan, Lina Li, Hongjun Gu, Lixin Xie, Xiangyun Lu, Longxiang Su
In this study, we used four machine learning models: support vector machine, logistic regression, random forest [16], and XGBoost [17]. The models were trained and tested on the data, and their performance was compared using six evaluation indicators: accuracy, specificity, precision, recall, F1 score, and AUROC (area under the ROC curve). Accuracy refers to the proportion of samples with correct predictions among all samples; precision refers to the proportion of samples predicted positive that are actually positive; recall refers to the proportion of actually positive samples that are predicted positive; specificity refers to the proportion of actually negative samples that are predicted negative; F1 score = (2 × precision × recall)/(precision + recall); the ROC (receiver operating characteristic) curve plots the true positive rate against the false positive rate across classification thresholds, and AUROC (area under the ROC curve) is defined as the area under this curve. In general, the larger the AUROC, the better the model. We randomly divided the data into a training set and a test set at a ratio of 4:1 while maintaining the ratio of positive to negative samples at 1:1. Fivefold cross-validation was used.
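The indicator definitions above (other than AUROC) all follow from the four confusion-matrix counts. A minimal sketch, using hypothetical counts rather than the study's AKI cohort:

```python
# Sketch: evaluation indicators from confusion-matrix counts.
# TP/FP/TN/FN values are hypothetical, not from the study.

TP, FP, TN, FN = 40, 10, 35, 15

accuracy    = (TP + TN) / (TP + FP + TN + FN)      # correct / total samples
precision   = TP / (TP + FP)                       # predicted positives that are truly positive
recall      = TP / (TP + FN)                       # actual positives that are predicted positive
specificity = TN / (TN + FP)                       # actual negatives that are predicted negative
f1          = 2 * precision * recall / (precision + recall)

print(accuracy, precision, recall, specificity, f1)
```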
Practical foundations of machine learning for addiction research. Part II. Workflow and use cases
Published in The American Journal of Drug and Alcohol Abuse, 2022
Pablo Cresta Morgado, Martín Carusso, Laura Alonso Alemany, Laura Acion
Many tasks are associated with standard performance metrics. In machine learning, accuracy is the default performance metric, measuring the ratio of correct predictions. This metric provides an intuitive estimate of performance but fails to distinguish different kinds of errors. Precision and recall measure, respectively, the proportion of positive predictions that are correct and the proportion of positive cases that are correctly identified, and the F1 measure is the harmonic mean of the two. The Area Under the Curve of Receiver Operating Characteristics (AUC ROC) metric provides a more complete assessment of the model because it measures the model's performance at several thresholds. For a definition of these metrics and their synonyms in applied statistics, see Table 2 and the Glossary.
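The threshold-sweeping idea behind AUC ROC can be sketched directly: record one (false positive rate, true positive rate) point per threshold and integrate with the trapezoid rule. The scores and labels below are illustrative, not from the article:

```python
# Sketch: AUC ROC by sweeping a classification threshold over the
# predicted scores and integrating TPR vs FPR (trapezoid rule).

def auc_roc(y_true, scores):
    pos = sum(y_true)
    neg = len(y_true) - pos
    points = [(0.0, 0.0)]  # threshold above every score: no positives predicted
    # One ROC point per distinct score, from strictest to loosest threshold.
    for thr in sorted(set(scores), reverse=True):
        tp = sum(1 for y, s in zip(y_true, scores) if s >= thr and y == 1)
        fp = sum(1 for y, s in zip(y_true, scores) if s >= thr and y == 0)
        points.append((fp / neg, tp / pos))
    # Trapezoidal integration over the ROC points.
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

y_true = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(auc_roc(y_true, scores))  # 0.75 for this toy example
```

Because the area aggregates behavior over every possible cut-off, a single AUC value reflects how well the scores rank positives above negatives, not just performance at one chosen threshold.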
A review of medical image detection for cancers in digestive system based on artificial intelligence
Published in Expert Review of Medical Devices, 2019
Jiangchang Xu, Mengjie Jing, Shiming Wang, Cuiping Yang, Xiaojun Chen
There are some experiments for the segmentation of esophageal cancers. Fechter et al. [35] proposed a random walk method combined with a 3D FCN to segment the esophagus in CT, and the mean Dice score was 0.76 ± 0.11. The whole process was automatic. Xue et al. [37] proposed an FCN with sem-label (semantic label) and roi-label (region-of-interest label) to segment esophageal cancer by self-transfer learning. The mean intersection over union (IoU) of the segmentation result was 77.8%, which can help with diagnosis. An FCN framework combined with an embedded Class Activation Map was proposed by Garcia-Peraza-Herrera et al. [46] to detect early squamous neoplasia in real time, and the F1 score (a measure of a test's accuracy, computed as the harmonic mean of precision and recall) was 87.3% for this method.
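The two overlap metrics quoted for these segmentation studies, Dice score and IoU, can be sketched from binary masks; the tiny masks below are toy data, not from the cited papers:

```python
# Sketch: Dice score and IoU for binary segmentation masks.

def dice_and_iou(pred, truth):
    """pred, truth: flat sequences of 0/1 mask values."""
    intersection = sum(p & t for p, t in zip(pred, truth))
    pred_sum, truth_sum = sum(pred), sum(truth)
    dice = 2 * intersection / (pred_sum + truth_sum)
    iou = intersection / (pred_sum + truth_sum - intersection)  # union in denominator
    return dice, iou

pred  = [1, 1, 1, 0, 0, 1]  # predicted mask, flattened
truth = [1, 1, 0, 0, 1, 1]  # ground-truth mask, flattened
dice, iou = dice_and_iou(pred, truth)
print(dice, iou)  # dice = 0.75, iou = 0.6
```

The two metrics are monotonically related (Dice = 2·IoU/(1+IoU)), which is why papers report either one as an overlap measure.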