Explore chapters and articles related to this topic
Deep Learning for Retinal Analysis
Published in Ervin Sejdić, Tiago H. Falk, Signal Processing and Machine Learning for Biomedical Big Data, 2018
Henry A. Leopold, John S. Zelek, Vasudevan Lakshminarayanan
The classification outcomes are used to derive KPIs that gauge system performance, the most common of which are mathematically defined in Table 17.2. Sensitivity (SN) is the proportion of true positive results detected by the classifier; it is sometimes referred to as the true positive fraction or the classifier’s recall. Specificity (SP) is the proportion of negative samples properly classified by the system. SN and SP are two of the most important KPIs to consider when developing a classification system: both represent the “truth condition” and thereby provide a far better measure of performance than accuracy (Acc) alone. In an ideal system, both SN and SP would be 100%; however, this is rarely the case. Systems must make a trade-off between false positives and false negatives, and in health care this raises a very tricky question: which type of false alarm is better? False positives ensure fewer cases are missed but incur additional resource allocations and associated expenses, whereas false negatives place less burden on the system but result in more missed cases or misdiagnoses.
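As a rough illustration of these definitions (a sketch with hypothetical counts, not taken from the chapter or from Table 17.2), SN, SP, and Acc can be computed directly from confusion-matrix counts, which also shows how a high Acc can hide a poor SN on an imbalanced sample:

```python
# Minimal, illustrative sketch: SN, SP and Acc from confusion-matrix counts.
# The counts below are hypothetical, chosen to show Acc masking a low SN.

def sensitivity(tp, fn):
    """SN (true positive fraction / recall): TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """SP: TN / (TN + FP)."""
    return tn / (tn + fp)

def accuracy(tp, tn, fp, fn):
    """Acc: fraction of all samples classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical screening results on 1,000 samples (100 positives, 900 negatives)
tp, fn, tn, fp = 50, 50, 890, 10
print(f"SN  = {sensitivity(tp, fn):.1%}")       # 50.0% - half the positives missed
print(f"SP  = {specificity(tn, fp):.1%}")       # 98.9%
print(f"Acc = {accuracy(tp, tn, fp, fn):.1%}")  # 94.0% - looks good despite poor SN
```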
Enhanced Fish Detection in Underwater Video Using Wavelet-Based Color Correction and Machine Learning
Published in Monika Mangla, Subhash K. Shinde, Vaishali Mehta, Nonita Sharma, Sachi Nandan Mohanty, Handbook of Research on Machine Learning, 2022
Jitendra P. Sonawane, Mukesh D. Patil, Gajanan K. Birajdar
Accuracy is the most commonly used metric to indicate whether a model is working correctly, but on its own it is not a clear indicator of performance: results degrade when classes are imbalanced. Precision is a parameter that represents how accurately the model predicts true positives. False positives are results in which cases are incorrectly classified as positive when in actuality they are negative. While recall represents the ability to find all relevant occurrences in a dataset, precision expresses the fraction of the data the model says is relevant that actually is relevant.
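A minimal sketch of this point (the counts and scenario below are hypothetical, not from the chapter): under heavy class imbalance, a detector that never reports a positive can still score a high accuracy, while precision and recall expose the failure.

```python
# Illustrative only: a detector that never reports a fish, on a video set where
# fish frames are rare, still achieves high accuracy.

def precision(tp, fp):
    """Fraction of predicted positives that are truly positive: TP / (TP + FP)."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    """Fraction of actual positives the model finds: TP / (TP + FN)."""
    return tp / (tp + fn) if (tp + fn) else 0.0

# 1,000 frames, only 20 contain fish; the model predicts "no fish" everywhere.
tp, fp, fn, tn = 0, 0, 20, 980
acc = (tp + tn) / (tp + tn + fp + fn)
print(f"accuracy  = {acc:.1%}")                # 98.0%
print(f"precision = {precision(tp, fp):.1%}")  # 0.0%
print(f"recall    = {recall(tp, fn):.1%}")     # 0.0%
```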
Values in Risk and Safety Assessment
Published in Diane P. Michelfelder, Neelke Doorn, The Routledge Handbook of the Philosophy of Engineering, 2020
This defence relies on an asymmetry between two types of error: false positives (type I error) and false negatives (type II error). A false positive is the error of inferring that there is an effect—typically, accepting a hypothesis—when in fact there is no effect, whereas a false negative is the error of inferring that there is no effect when in fact there is. For example, if our hypothesis is that a certain alloy may rust when exposed to oxygen and moisture, wrongly accepting the hypothesis would constitute a false positive, whereas wrongly rejecting the hypothesis constitutes a false negative.
Upcoming mandatory testing requirements for chromium plating facilities
Published in Transactions of the IMF, 2020
Chrome plating facilities in California are being required to test for PFAS even if there is no evidence of historical contamination at the property from any chemicals. Current testing requires analysis of 25 different kinds of PFAS, including PFOS and 6:2 FTS. Because PFAS are considered toxic at very low concentrations and are prevalent in common consumer products and tools, false positive detections are common during PFAS investigations. False positive detections can lead to unnecessary expense and additional investigation. Therefore, selecting a knowledgeable, skilled, and experienced environmental consulting firm is paramount to keeping the investigation as low cost as possible. The author is a Senior Professional Geologist at SCS Engineers specialising in emerging contaminants.
A new method for estimation of critical speed for railway tracks on soft ground
Published in International Journal of Rail Transportation, 2018
Karin Norén-Cosgriff, Eric Gustav Berggren, Amir Massoud Kaynia, Niels Norman Dam, Niels Mortensen
Outcomes (1)–(3) are acceptable for an inventory method, although the number of false positive results has to be limited for the method to be successful. False positive results require some extra evaluation, either by looking deeper into the measurement data to find out whether something is wrong, or by performing a site investigation. False negative results, however, are more severe, since problem areas may be missed. One cause of false negatives is too low a measurement speed. Hence, it is important to point out that even though the measurements do not indicate any critical speed problems, the results can only be trusted for critical speeds up to about twice the measurement speed.
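As a rough sketch of that rule of thumb (the speeds and the helper below are illustrative assumptions, not taken from the paper), a measurement run can only clear a site for critical speeds up to roughly twice the speed at which it was measured:

```python
# Illustrative sketch of the stated rule of thumb; all values are hypothetical.

def trusted_upper_bound(measurement_speed_kmh, factor=2.0):
    """Highest critical speed a 'no problem' result can be trusted for."""
    return factor * measurement_speed_kmh

v_meas = 90.0    # speed of the measurement run, km/h (assumed)
v_line = 200.0   # intended operating speed, km/h (assumed)

if v_line > trusted_upper_bound(v_meas):
    # The run cannot rule out a critical-speed problem at the operating speed,
    # i.e. a false negative is possible.
    print("Measurement speed too low to clear this operating speed.")
```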
Multi-Regional landslide detection using combined unsupervised and supervised machine learning
Published in Geomatics, Natural Hazards and Risk, 2021
Faraz S. Tehrani, Giorgio Santinelli, Meylin Herrera Herrera
Although we achieved promising results in this study, more landslide and non-landslide segments are needed in order to increase the performance and reliability of the landslide detection model. This can be regarded as the major limitation of our study, which can be alleviated in future by adding more data to the landslide database that we created. Another limitation that we see in our approach and similar works is the use of multi-spectral data as the main data source for landslide detection. Two major drawbacks are observed in this regard. First, it is not always possible to acquire timely post-landslide imagery (within a few hours to a few days), given that most landslides are triggered by rainfall and cloud contamination of immediate post-landslide images can therefore be inevitable. Using cloud removal algorithms may not help either, because landslide segments can be missed by those algorithms. A remedy for this limitation is microwave remote sensing data (e.g. Sentinel-1), which is far less affected by atmospheric conditions than multi-spectral data. Second, using multi-spectral data, it is likely that non-vegetated segments are classified as landslide segments, which increases the number of false positives. This can be corrected by manual intervention, but it can be a cumbersome task, especially if there are many false positives. Here, microwave remote sensing data and techniques such as Interferometric Synthetic Aperture Radar (InSAR) can be helpful, using change in elevation as an additional feature to detect landslides and distinguish them from non-landslide segments.