Thermal Imaging for Inflammatory Arthritis Evaluation
Published in U. Snekhalatha, K. Palani Thanaraj, Kurt Ammer, Artificial Intelligence-Based Infrared Thermal Image Processing and Its Applications, 2023
U. Snekhalatha, K. Palani Thanaraj, Kurt Ammer
Hyperparameter optimization is performed using cross-validation to avoid overfitting. In machine learning problems, we split the data into training and testing sets; the training data is then further split into k subsets called folds.
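A minimal sketch of this idea, assuming scikit-learn; the estimator, parameter grid, and synthetic data below are illustrative, not from the chapter:

```python
# Hypothetical illustration: tuning a hyperparameter with k-fold
# cross-validation on the training split only (assumes scikit-learn).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Each candidate value of C is scored by 5-fold cross-validation on the
# training folds; the test set is never touched during tuning.
search = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```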
Machine Learning Techniques for Prediction of Diabetes
Published in Punit Gupta, Dinesh Kumar Saini, Rohit Verma, Healthcare Solutions Using Machine Learning and Informatics, 2023
Tarun Jain, Payal Garg, Jalak Yogesh Patel, Div Chaudhary, Horesh Kumar, Vivek K. Verma, Rishi Gupta
The main objective of the research paper [11] was to compare various machine learning classification algorithms for diabetic patients. The algorithms used were naïve Bayes, logistic regression, IBk, support vector machine, OneR, multilayer perceptron, decision tree (J48), AdaBoostM1, bagging, and random forest. Ten-fold cross-validation was executed to avoid the problem of overfitting. The highest accuracy, 78.01%, was given by logistic regression and the lowest, 70.99%, by IBk.
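A minimal sketch of such a comparison, using scikit-learn stand-ins for a few of the listed algorithms; the synthetic dataset and model choices are illustrative, not the study's setup:

```python
# Hypothetical sketch: comparing classifiers with ten-fold
# cross-validation (assumes scikit-learn; these models stand in for
# the implementations used in [11]).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=0)
models = {
    "naive Bayes": GaussianNB(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)  # 10-fold CV accuracy
    print(f"{name}: {scores.mean():.3f}")
```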
Wound Tissue Classification with Convolutional Neural Networks
Published in Kayvan Najarian, Delaram Kahrobaei, Enrique Domínguez, Reza Soroushmehr, Artificial Intelligence in Healthcare and Medicine, 2022
Rafael M. Luque-Baena, Francisco Ortega-Zamorano, Guillermo López-García, Francisco J. Veredas
The proposed methodology, illustrated in Figure 8.1, was presented in the previous section. To avoid classification bias, a stratified K-fold cross-validation process has been carried out with K = 5 folds, setting 20% of the dataset aside for model testing. Furthermore, this process has been repeated 5 times to increase the robustness of the results. For each fold, there are 67 training images and 23 validation images (together 80% of the dataset), and 23 test images (the remaining 20%). After resampling to obtain the partition of images corresponding to each fold, the process described in section X.3.1 is applied, generating on average 6,652 ROIs for training, 2,362 for validation, and 2,305 for testing.
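A minimal sketch of one possible reading of this split scheme, assuming scikit-learn; the 113-image placeholder data and fold sizes mirror the counts above but are not the authors' code:

```python
# Hypothetical sketch: stratified 5-fold CV in which each fold's 20%
# serves as the test set and the remaining 80% is split again into
# training and validation images (assumes scikit-learn; placeholder data).
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

X = np.random.rand(113, 8)          # 113 images' feature vectors
y = np.random.randint(0, 2, 113)    # placeholder tissue labels

for repeat in range(5):             # whole process repeated 5 times
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=repeat)
    for dev_idx, test_idx in skf.split(X, y):      # ~23 test images
        X_train, X_val, y_train, y_val = train_test_split(
            X[dev_idx], y[dev_idx], test_size=23,  # ~67 train / 23 val
            stratify=y[dev_idx], random_state=repeat)
        # ... extract ROIs, train on X_train, tune on X_val,
        # evaluate on X[test_idx]
```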
In silico QSAR modeling to predict the safe use of antibiotics during pregnancy
Published in Drug and Chemical Toxicology, 2023
Feyza Kelleci Çelik, Gül Karaduman
The QSAR models were validated both internally and externally. Ten-fold cross-validation was applied to all the machine learning models for the internal validation. k-fold cross-validation is a technique used to ensure that machine learning models perform well on unseen data. k is a chosen number that splits the data sample into k groups of equal size. Once one testing cycle was completed, new training and test sets were created and the same procedure was performed on the refreshed sets. The process was complete after the cross-validation had been repeated k times. In each cycle, k − 1 of the k groups were used as training data, while the remaining one was kept as a test set to validate the model. The overall performance of the model is the sum of the performances of the k classification runs, averaged by dividing by k. In our study, k was set to 10.
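A minimal sketch of these mechanics, written from scratch with NumPy; the fit and score callables are placeholders, not the study's models:

```python
# Hypothetical sketch of k-fold cross-validation from scratch: k-1 groups
# train the model, the remaining group tests it, and the k scores are
# averaged (placeholder fit/score functions; not the study's code).
import numpy as np

def k_fold_score(X, y, fit, score, k=10):
    idx = np.random.permutation(len(X))
    folds = np.array_split(idx, k)           # k equal-sized groups
    scores = []
    for i in range(k):
        test = folds[i]                      # 1 group held out as test set
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        model = fit(X[train], y[train])      # train on the other k-1 groups
        scores.append(score(model, X[test], y[test]))
    return sum(scores) / k                   # average over the k runs
```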
Prediction of emergence from prolonged disorders of consciousness from measures within the UK rehabilitation outcomes collaborative database: a multicentre analysis using machine learning
Published in Disability and Rehabilitation, 2023
Richard J. Siegert, Ajit Narayanan, Lynne Turner-Stokes
Models can be “data-fitting” or “cross-validated.” A data-fit model uses all the data for model construction, with no samples withheld for testing. A cross-validated model uses only a subset of the data for model construction and then tests the robustness of the model against the remaining withheld data. Common methods of cross-validation include leave-one-out (LOO) and x-fold. LOO constructs a model on all the samples except one and then tests the model on that withheld sample, repeated for every sample. This leads to the construction of as many models as there are samples, and the accuracy of prediction of LOO cross-validation (true positives plus true negatives over all samples) is reported at the end of all model evaluations. In x-fold cross-validation, the data is split into x equal-sized sets (typically x = 10). A model is constructed on all but one set and then tested against the withheld set. This is repeated for every set, with the overall average accuracy reported at the end of all evaluations.
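A minimal sketch of leave-one-out cross-validation, assuming scikit-learn; the classifier and the iris dataset stand in for the article's models and data:

```python
# Hypothetical sketch: leave-one-out cross-validation, building one model
# per sample and scoring it on the single withheld sample
# (assumes scikit-learn; data and classifier are illustrative).
from sklearn.datasets import load_iris
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=LeaveOneOut())   # one model per sample
print(f"LOO accuracy: {scores.mean():.3f}")  # correct / all samples
```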
Using a decision tree approach to determine hearing aid ownership in older adults
Published in Disability and Rehabilitation, 2023
Yvonne Tran, Diana Tang, Catherine McMahon, Paul Mitchell, Bamini Gopinath
Performance of the model was validated using a leave-one-out cross-validation technique: one data point is left out while the model is built with the remaining dataset. This was repeated for all the data points. The model is tested against the data point that is left out, and test accuracy and errors are calculated across all data points. This validation technique was chosen because it has the advantage that all data points are used for the model and test error rates are not overestimated [29]. Performance of the CART model was measured by overall accuracy and Cohen’s kappa. Accuracy is the number of correct classifications divided by the total number of predictions, expressed as a percentage. Cohen’s kappa measures the degree of agreement after adjusting for chance. In machine learning, the Cohen’s kappa statistic measures the degree of agreement between the model’s prediction and the actual value; it is used to deduce whether the classification from the model is better than, equal to, or worse than a random model [30]. Kappa is a measure of reliability: values between 0 and 0.2 are considered poor or slight, 0.21–0.4 fair, 0.41–0.60 moderate, 0.61–0.8 substantial, greater than 0.81 almost perfect, and 1.0 perfect [31,32].
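A minimal sketch of these two metrics, assuming scikit-learn; the label vectors below are illustrative, not the study's data:

```python
# Hypothetical sketch: overall accuracy and Cohen's kappa for a set of
# predictions; kappa = (p_o - p_e) / (1 - p_e), where p_o is observed
# agreement and p_e is chance agreement (assumes scikit-learn).
from sklearn.metrics import accuracy_score, cohen_kappa_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # actual class labels
y_pred = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]  # model predictions

print(f"accuracy: {accuracy_score(y_true, y_pred):.0%}")    # 80%
print(f"kappa:    {cohen_kappa_score(y_true, y_pred):.2f}") # 0.58, moderate
```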