Artificial Intelligence Methodologies in Dentistry
Published in Kayvan Najarian, Delaram Kahrobaei, Enrique Domínguez, Reza Soroushmehr, Artificial Intelligence in Healthcare and Medicine, 2022
Reza Soroushmehr, Winston Zhang, Jonathan Gryak, Kayvan Najarian, Najla Al Turkestani, Lucia Cevidanes, Romain Deleat-Besson, Celia Le, Jonas Bianchi
In the past three decades, given the aforementioned challenges in delineating images, many automated and semi-automated techniques with applications in Dentistry have been developed. Silva et al. reviewed segmentation methods employed in dental imaging applications (Silva et al., 2018) and categorized them into region-based, threshold-based, cluster-based (e.g., Fuzzy C-means), boundary-based (e.g., edge detection, active contour), and watershed-based methodologies. They evaluated these methods using accuracy, specificity, precision, recall, and F-score metrics.
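As a minimal sketch of the simplest category above, threshold-based segmentation labels each pixel by comparing its intensity to a cutoff; the synthetic image, threshold value, and ground-truth mask below are all hypothetical, chosen only to illustrate how a segmentation is produced and then scored with the precision and recall metrics the review uses:

```python
import numpy as np

# Synthetic 2-D "radiograph": a bright square on a dark background, plus mild noise
rng = np.random.default_rng(0)
image = np.zeros((8, 8))
image[2:6, 2:6] = 200.0
image += rng.normal(0, 5, image.shape)

# Threshold-based segmentation: keep pixels above a fixed intensity cutoff
threshold = 100.0
mask = image > threshold

# Known ground-truth mask for this synthetic example
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True

# Score the segmentation with two of the metrics named above
tp = np.sum(mask & truth)
fp = np.sum(mask & ~truth)
fn = np.sum(~mask & truth)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(precision, recall)
```

On real radiographs the cutoff would be chosen adaptively (e.g., from the intensity histogram) rather than fixed in advance, which is one reason the review distinguishes threshold-based methods from the more robust region- and cluster-based families.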
Investigation of IoMT-Based Cancer Detection and Prediction
Published in Meenu Gupta, Rachna Jain, Arun Solanki, Fadi Al-Turjman, Cancer Prediction for Industrial IoT 4.0: A Machine Learning Perspective, 2021
Meet Shah, Harsh Patel, Jai Prakash Verma, Rachna Jain
To evaluate the different pre-trained models, we use class-wise precision (eq 1.1), recall (eq 1.2), F1 score (eq 1.3), and overall accuracy (eq 1.4), as shown in Tables 1.2 and 1.3. Here, TP (true positive) is the number of instances where the model correctly classified the positive class as positive; FP (false positive) is the number of instances where the model incorrectly classified the negative class as positive; FN (false negative) is the number of instances where the model incorrectly classified the positive class as negative; and TN (true negative) is the number of instances where the model correctly classified the negative class as negative. The F1 score is essentially the harmonic mean of precision and recall, and we use it as a way of combining both.
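The four equations referenced above (eq 1.1–1.4) follow directly from the TP/FP/FN/TN definitions; the counts below are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical confusion-matrix counts for one class
tp, fp, fn, tn = 40, 10, 5, 45

precision = tp / (tp + fp)                           # eq 1.1
recall = tp / (tp + fn)                              # eq 1.2
f1 = 2 * precision * recall / (precision + recall)   # eq 1.3
accuracy = (tp + tn) / (tp + fp + fn + tn)           # eq 1.4

print(round(precision, 3), round(recall, 3), round(f1, 3), round(accuracy, 3))
# → 0.8 0.889 0.842 0.85
```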
Swarm Intelligence and Evolutionary Algorithms for Heart Disease Diagnosis
Published in Sandeep Kumar, Anand Nayyar, Anand Paul, Swarm Intelligence and Evolutionary Algorithms in Healthcare and Drug Development, 2019
F Score is defined as twice the ratio of the product of recall and precision to the sum of recall and precision.
Significance of platelets in the early warning of new-onset AKI in the ICU by using supervised learning: a retrospective analysis
Published in Renal Failure, 2023
Pan Pan, Yuhong Liu, Fei Xie, Zhimei Duan, Lina Li, Hongjun Gu, Lixin Xie, Xiangyun Lu, Longxiang Su
In this study, we used four machine learning models: support vector machine, logistic regression, random forest [16], and XGBoost [17]. The models were trained and tested on the data and their effects compared using six evaluation indicators: accuracy, specificity, precision, recall, F1 score, and AUROC (area under the ROC curve). Accuracy refers to the proportion of all samples that are predicted correctly; precision refers to the proportion of samples predicted positive that are actually positive; recall refers to the proportion of actually positive samples that are predicted positive; specificity refers to the proportion of actually negative samples that are predicted negative; F1 score = (2 * precision rate * recall rate)/(precision rate + recall rate); and the ROC (receiver operating characteristic) curve characterizes the model's ability to discriminate between classes, with AUROC (area under ROC curve) defined as the area under this curve. In general, the larger the AUROC, the better the model. We randomly divided the data into the training set and the test set at a ratio of 4:1 while maintaining the ratio of positive to negative samples at 1:1. Fivefold cross-validation was used.
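The evaluation protocol above (4:1 stratified split plus fivefold cross-validation) can be sketched with scikit-learn; the synthetic balanced data set, the choice of logistic regression as the example model, and AUROC as the cross-validation score are all illustrative assumptions, not a reproduction of the study's pipeline:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for the ICU data: balanced positive/negative classes
X, y = make_classification(n_samples=200, n_features=10,
                           weights=[0.5, 0.5], random_state=0)

# 4:1 train/test split, preserving the 1:1 class ratio via stratification
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Fivefold cross-validation on the training set, scored by AUROC
model = LogisticRegression(max_iter=1000)
auroc = cross_val_score(model, X_train, y_train, cv=5, scoring="roc_auc")
print(auroc.mean())
```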
Clinical risk assessment of chronic kidney disease patients using genetic programming
Published in Computer Methods in Biomechanics and Biomedical Engineering, 2022
Arvind Kumar, Nishant Sinha, Arpit Bhardwaj, Shivani Goel
For the considered data set, the four aforementioned techniques are used to classify CKD and not-CKD patients. We performed two separate experiments. In the first experiment, we removed all patients with missing values (CKD-158). In the second experiment, we replaced missing values with the mean of the corresponding column (CKD-400). We performed ten-fold cross-validation (10-CV) for each scenario. The proposed approach is implemented in Python 3 on a Windows 10 machine, using the NumPy, pandas, scikit-learn, and PySwarm libraries. For KNN, the value of K is set to 3. For PSO, the population size is 30, the maximum number of generations is 200, and the cognitive and social parameters (c1 and c2) are both set to 2. For performance comparison, we calculated the accuracy, sensitivity (recall), specificity, F1-score, and the area under the ROC curve (AUC) (Cuadros-Rodríguez et al. 2016).
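The CKD-400 preprocessing and the KNN baseline under 10-CV can be sketched with the libraries named above; the synthetic data set and injected missing values are assumptions for illustration, not the actual CKD data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the CKD data: 400 patients, binary CKD/not-CKD label
X, y = make_classification(n_samples=400, n_features=8, random_state=1)

# CKD-400-style preprocessing: replace missing values with the column mean
X[::50, 0] = np.nan                        # simulate a few missing entries
col_means = np.nanmean(X, axis=0)
X = np.where(np.isnan(X), col_means, X)

# KNN with K = 3, evaluated with ten-fold cross-validation (10-CV)
knn = KNeighborsClassifier(n_neighbors=3)
scores = cross_val_score(knn, X, y, cv=10, scoring="accuracy")
print(scores.mean())
```

The PSO hyperparameters (population 30, 200 generations, c1 = c2 = 2) would be passed to the PySwarm optimizer wrapped around this evaluation loop; that outer loop is omitted here to keep the sketch self-contained.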
Adaptive kernel scaling support vector machine with application to a prostate cancer image study
Published in Journal of Applied Statistics, 2022
Results of classification on the three data sets, averaged over one hundred repetitions, are reported in Table 4. The proposed method has the smallest test errors among all classifiers on all three data sets, with training errors similar to those of the other classifiers. Its F-score is greater than that of the other classifiers, though only marginally. All methods estimate recall rates very close to 1, while the proposed method achieves the greatest precision rate. The standard errors of the estimated F-score and test error rate are similar across the proposed method and the other classifiers (and so are not reported), and much smaller than those from logistic regression. The results on the real KEEL data sets confirm the superb performance of the proposed method.