Classification of Diabetic Retinopathy by Applying an Ensemble of Architectures
Published in Rohit Raja, Sandeep Kumar, Shilpa Rani, K. Ramya Laxmi, Artificial Intelligence and Machine Learning in 2D/3D Medical Image Processing, 2020
During training, validation accuracy is evaluated after each epoch using the set of validation images. This metric is used for tuning hyper-parameters and for analyzing how the network performs on new images from the testing set. The final performance of every network is evaluated on the same testing set. The three standard networks are combined into an ensemble based on a weighted sum, with a weight assigned to each architecture. The performance of the ensemble and of the three individual networks is shown in Table 7.1. ROC curves of the various networks are compared and plotted in Figure 7.6. The classifier whose ROC curve has the higher AUC is considered more accurate than the others. Generally, a classifier with an AUC larger than 0.9 is considered an excellent performer. Analysis of the different computed metrics shows that the presented ensemble method performs better than the individual networks.
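The weighted-sum ensemble described above can be sketched as follows. This is a minimal, hypothetical illustration: the softmax outputs and the weights (which the text says are assigned per architecture, e.g. tuned on the validation set) are invented for the example.

```python
def ensemble_predict(probs_per_net, weights):
    """Weighted-sum ensemble: combine per-network class probabilities
    and return the index of the highest-scoring class."""
    n_classes = len(probs_per_net[0])
    combined = [0.0] * n_classes
    for probs, w in zip(probs_per_net, weights):
        for i, p in enumerate(probs):
            combined[i] += w * p
    return max(range(n_classes), key=lambda i: combined[i])

# Hypothetical softmax outputs of the three networks for one test image
p_net1 = [0.6, 0.4]
p_net2 = [0.3, 0.7]
p_net3 = [0.2, 0.8]
weights = [0.5, 0.3, 0.2]  # per-architecture weights (assumed values)
label = ensemble_predict([p_net1, p_net2, p_net3], weights)
```

Here the combined scores are 0.43 and 0.57, so the ensemble overrules the first network's vote.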
Assessment of Accuracy for Soft Classification
Published in Anil Kumar, Priyadarshi Upadhyay, A. Senthil Kumar, Fuzzy Machine Learning Algorithms for Remote Sensing Image Classification, 2020
The receiver operating characteristic (ROC), which is based on the Neyman-Pearson detection theory, is used for the evaluation of detection performance in signal processing, communication, and medical diagnosis (Chang et al., 2001; Wang et al., 2005; Miyamoto et al., 2008; Chang, 2010). The ROC curve is used to illustrate the performance of a binary classifier system, that is, whether a class is detected (‘hit’) or not (‘miss’). The detection is measured by the area under the Neyman-Pearson curve. The area is denoted by Az and bounded between ½ and 1; for better detection, it should be closer to 1 (Wang et al., 2005). The 2D ROC curve is plotted with the false alarm rate (FAR) on one axis (x-axis) and the true positive (TP) rate on the other axis (y-axis). The 3D ROC curve, on the other hand, is plotted by taking the false alarm rate (FAR) on the x-axis, the detection threshold (t) on the y-axis, and the true positive (TP) rate on the z-axis (Figure 7.1). The 2D ROC can be used for the hard decision produced by the classifier, whereas the 3D ROC can be used for the soft decision (Wang et al., 2005).
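The construction of the 2D ROC curve and of Az can be sketched in a few lines of plain Python. The scores, labels, and threshold grid below are invented for illustration; sweeping the detection threshold t traces out the (FAR, TP rate) pairs, and Az is the trapezoidal area under that curve (the 3D ROC would simply retain t as an extra axis instead of collapsing over it).

```python
def roc_points(scores, labels, thresholds):
    """Return (FAR, TPR) pairs, one per detection threshold t.
    A sample is declared a 'hit' when its score >= t."""
    p = sum(labels)            # number of positive (target) samples
    n = len(labels) - p        # number of negative samples
    pts = []
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        pts.append((fp / n, tp / p))
    return pts

def az(points):
    """Trapezoidal area under the 2D ROC curve, points sorted by FAR."""
    pts = sorted(points)
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

# Hypothetical detector outputs; 1 = target present
scores = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 0, 0]
pts = roc_points(scores, labels, [0.05, 0.25, 0.5, 0.75, 0.95, 1.01])
```

With this perfectly separated toy data the curve passes through (0, 1), so Az = 1; a useless detector lies on the diagonal with Az = ½.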
Repository and interpretation
Published in Michael O’Byrne, Bidisha Ghosh, Franck Schoefs, Vikram Pakrashi, Image-Based Damage Assessment for Underwater Inspections, 2019
The ROC curves offer a convenient way of characterizing the performance of NDT methods under various environmental conditions (Rouhan & Schoefs, 2003) and have been extended to image detection (Pakrashi et al., 2010). A ROC curve is a plot of the true positive rate (sensitivity) versus the false positive rate (1-specificity) obtained when varying an input parameter for a given technique. Sensitivity, known as the probability of detection in the fields of probability and decision theory, measures the proportion of damaged pixels that are correctly identified as representing damage. Specificity measures the proportion of non-damaged pixels that are correctly identified as representing non-damage. Throughout this book, the sensitivity and specificity are determined by comparing the damaged regions detected using an image-based technique with a visually segmented image. The visually segmented image is created by a human operator who manually identifies the damaged regions in an image. This visually segmented image acts as the control, as it is assumed to show the true extent of damage. It only needs to be created when the performance of the technique under scrutiny is to be gauged. Each (1-specificity, sensitivity) pair forms a coordinate in the ROC space that corresponds to a particular decision threshold.
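The pixel-level comparison against the visually segmented control image can be sketched as below. The masks are hypothetical flat lists of 0/1 pixel labels (1 = damage); in practice they would come from the image-based technique and the human operator respectively.

```python
def sensitivity_specificity(detected, truth):
    """Compare a detected damage mask against the visually segmented
    (control) mask; both are flat sequences of 0/1 pixel labels."""
    tp = sum(1 for d, t in zip(detected, truth) if d == 1 and t == 1)
    fn = sum(1 for d, t in zip(detected, truth) if d == 0 and t == 1)
    tn = sum(1 for d, t in zip(detected, truth) if d == 0 and t == 0)
    fp = sum(1 for d, t in zip(detected, truth) if d == 1 and t == 0)
    sensitivity = tp / (tp + fn)   # damaged pixels correctly flagged
    specificity = tn / (tn + fp)   # non-damaged pixels correctly cleared
    return sensitivity, specificity

# Toy 5-pixel example: the technique over-detects one pixel
sens, spec = sensitivity_specificity([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
```

The pair (1 - spec, sens) is then one coordinate in ROC space; re-running the detector at other decision thresholds yields the rest of the curve.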
Investigating major cause of crashes on Indian expressways and developing strategies for traffic safety management
Published in International Journal of Crashworthiness, 2022
Abdul Basit Khan, Rajat Agrawal, S.S. Jain
The test results of the model are said to be valuable only if sensitivity and specificity, when added, give a value of more than 1.5 (midway between 1, i.e. useless, and 2, i.e. best) [34]. Moreover, the receiver operating characteristic (ROC) curve was plotted between sensitivity (true positive rate) and specificity (true negative rate), and the area under the curve (AUC) was also evaluated. The ROC curve is a fundamental tool for diagnostic test evaluation and is a plot of the true positive rate (sensitivity) against the false positive rate (1 - specificity) for the different possible cut-off points of a diagnostic test. According to Mandrekar J.N. (2010), an AUC value ranging from 0.7 to 0.8 is considered acceptable, from 0.8 to 0.9 excellent, and more than 0.9 outstanding [34].
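The two rules of thumb quoted above (the 1.5 cut-off for sensitivity plus specificity, and Mandrekar's AUC bands) are simple enough to encode directly; the function names below are illustrative, not from the paper.

```python
def is_valuable(sensitivity, specificity):
    """Rule of thumb from the text: the test adds value only when
    sensitivity + specificity exceeds 1.5 (midway between 1 and 2)."""
    return sensitivity + specificity > 1.5

def grade_auc(auc):
    """AUC interpretation bands after Mandrekar (2010)."""
    if auc > 0.9:
        return "outstanding"
    if auc >= 0.8:
        return "excellent"
    if auc >= 0.7:
        return "acceptable"
    return "poor"
```

For example, a test with sensitivity 0.85 and specificity 0.75 clears the 1.5 bar, and an AUC of 0.85 falls in the "excellent" band.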
Data-driven Detection and Early Prediction of Thermoacoustic Instability in a Multi-nozzle Combustor
Published in Combustion Science and Technology, 2022
Chandrachur Bhattacharya, Jacqueline O’Connor, Asok Ray
For each method, a variety of parameters need to be tuned: the window length for all three methods (i.e., FFT, STSA, and HMM), the downsampling parameters for both STSA and HMM, the alphabet size for STSA, and the number of hidden states for HMM. Furthermore, for each parameter combination there is an optimal threshold that yields the best performance. As mentioned earlier, the optimal threshold is computed as the threshold yielding the fewest errors (i.e., the least number of misclassified data windows from the ROC curves). Thus, there is no “global threshold” that works across all the parameter sets; instead, the optimal threshold value is a strong function of the parameters and must be determined from a training set before implementation of the algorithm. Optimal values of the three thresholds are obtained from the respective ROC plots of FFT, STSA, and HMM, as listed in Table 6. In the following sub-section, results corresponding to a particular set of parameters and the corresponding optimal threshold are discussed, which elucidates this point further.
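The threshold-selection step described above, choosing the point on the training-set ROC curve with the fewest misclassified windows, can be sketched as follows. The scores, labels, and candidate thresholds are invented for illustration; in the paper this would be run once per parameter combination for each of FFT, STSA, and HMM.

```python
def optimal_threshold(scores, labels, candidate_thresholds):
    """Pick the threshold that misclassifies the fewest training
    windows (label 1 = unstable, 0 = stable); a window whose score
    is >= t is declared unstable."""
    best_t, best_err = None, float("inf")
    for t in candidate_thresholds:
        err = sum(1 for s, y in zip(scores, labels)
                  if (s >= t) != (y == 1))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Hypothetical instability scores for five training windows
scores = [0.1, 0.2, 0.4, 0.8, 0.9]
labels = [0, 0, 0, 1, 1]
t_opt = optimal_threshold(scores, labels, [0.3, 0.5, 0.85])
```

Re-running this for a different parameter set generally moves the score distributions and hence t_opt, which is exactly why no single global threshold works.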
Real-time traffic incident detection based on a hybrid deep learning model
Published in Transportmetrica A: Transport Science, 2022
Linchao Li, Yi Lin, Bowen Du, Fan Yang, Bin Ran
The preceding sections introduced the two key parts of our hybrid model, the GAN and the TSSAE. The architecture of the proposed hybrid model is shown in Figure 5. The GAN is first applied to generate new incident samples using the selected spatial and temporal variables. Then, the new datasets containing newly generated incident samples are used as the input to the TSSAE. The last step is to evaluate the performance of the proposed model. In this study, we apply four criteria: detection rate (DR), false alarm rate (FAR), classification rate (CR) and the area under the curve (AUC). DR indicates the proportion of incidents correctly detected. A higher DR represents a more accurate model. However, a model with higher DR may also be overly sensitive, that is, it falsely detects more incidents (Asakura et al. 2017). Therefore, another criterion, FAR, is introduced to evaluate model accuracy. AUC is the area under the receiver operating characteristic (ROC) curve, which represents the classification ability of the model as the discrimination threshold varies. Moreover, the computation time of the model is calculated to evaluate its efficiency.
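The first three evaluation criteria can be computed from a confusion matrix over the detection intervals. The sketch below is illustrative: the toy predictions are invented, and the FAR definition follows one common convention in incident detection (false alarms as a share of all non-incident intervals), which may differ in detail from the paper's.

```python
def detection_metrics(pred, truth):
    """DR, FAR, and CR for binary incident detection.
    pred/truth: 1 = incident, 0 = no incident, one entry per interval."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    dr = tp / (tp + fn)          # share of incidents correctly detected
    far = fp / (fp + tn)         # share of non-incidents falsely flagged
    cr = (tp + tn) / len(truth)  # overall classification rate
    return dr, far, cr

# Toy example: 4 intervals, one real incident, one false alarm
dr, far, cr = detection_metrics([1, 0, 1, 0], [1, 0, 0, 0])
```

The example makes the trade-off in the text concrete: this detector catches every incident (DR = 1) but pays for it with a nonzero FAR, which is why both criteria are reported together alongside CR and AUC.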