Artificial Intelligence Six Cognitive Driven Algorithms
Published in Waymond Rodgers, Artificial Intelligence in a Throughput Model, 2020
Base rate expresses how often a feature (or event) occurs in the population. That is, the probability of drawing such an item is equal to the number of such items divided by the total number of items in the population. So convincing are representativeness and its accompanying stereotyping that we habitually ignore base-rate information and base our decision choices on representativeness instead. This is recognized as the base-rate fallacy. Consider, for example, the following personality summary of an imaginary person named Steve, which appeared in a study by Kahneman and Tversky (1973, p. 241): “Steve is very shy and withdrawn, invariably helpful, but with not much interest in individuals or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail.”
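To make the arithmetic concrete, the sketch below contrasts a base-rate-weighted judgment with a purely representativeness-driven one. The population counts and the stereotype “fit” scores are illustrative assumptions, not figures from Kahneman and Tversky (1973).

```python
# Hypothetical population: many more farmers than librarians.
population = {"farmer": 200, "librarian": 10}
total = sum(population.values())

# Base rate: number of such items divided by the total in the population.
base_rate = {job: n / total for job, n in population.items()}

# Representativeness-driven judgment: how well Steve's description
# "fits" each stereotype (hypothetical similarity scores).
fit = {"farmer": 0.1, "librarian": 0.9}

# A Bayes-consistent judgment weighs fit by the base rate;
# the base-rate fallacy is relying on `fit` alone.
weighted = {job: fit[job] * base_rate[job] for job in population}
norm = sum(weighted.values())
posterior = {job: w / norm for job, w in weighted.items()}
print(posterior)  # farmer ~0.69, librarian ~0.31: despite the stereotype,
                  # "farmer" remains the more probable category
```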
Security: Basics and Security Analytics
Published in Rakesh M. Verma, David J. Marchette, Cybersecurity Analytics, 2019
Rakesh M. Verma, David J. Marchette
The base-rate fallacy is related to the problem of unbalanced data sets and refers to the mistake of applying the aggregate accuracy of a classifier to the probability that a single data instance is good or bad. An example illustrates the point well. Suppose that the prior probability that an email is phishing has been determined through some data collection method and happens to be 10%. Let us say that a phishing detector from the academic literature is 90% accurate. Now suppose the detector says that a specific email e is a phishing email. Does that mean that the probability of e being a phishing email is 90%? It turns out that if we apply Bayes’ rule (discussed in Chapter 4 below) to calculate this probability, it is only 50%, i.e., the same probability that we would get by tossing a fair coin. This example shows what we are up against.
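The 50% figure can be checked directly. The short sketch below applies Bayes’ rule to the numbers in the text, reading “90% accurate” as both a 90% true-positive and a 90% true-negative rate (an assumption the example implies but does not state).

```python
# Numbers from the text: 10% base rate, 90% accurate detector.
p_phish = 0.10
tpr = 0.90        # P(alarm | phishing): true-positive rate
fpr = 1 - 0.90    # P(alarm | legitimate): false-positive rate

# P(alarm) by the law of total probability.
p_alarm = tpr * p_phish + fpr * (1 - p_phish)

# P(phishing | alarm) by Bayes' rule.
posterior = tpr * p_phish / p_alarm
print(posterior)  # 0.5 -- no better than tossing a fair coin
```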
Automation and Human Performance in Aviation
Published in Pamela S. Tsang, Michael A. Vidulich, Principles and Practice of Aviation Psychology, 2002
Raja Parasuraman, Evan A. Byrne
Standard procedures from signal detection theory (SDT) are available for setting the decision criterion to achieve a balance between false alarms and misses (Swets & Pickett, 1982). These procedures have been adapted for examining alerting thresholds for TCAS (Kuchar, 1996). However, adjusting the decision criterion for a low device false alarm rate may be insufficient by itself for ensuring high alarm reliability. In an extension of SDT with Bayes’ theorem, Parasuraman et al. (1997) carried out a computational analysis of the automated alarm problem. They showed that despite the availability of the most advanced sensor technology and the development of very sensitive detection algorithms, the low a priori probability or base rate of most hazardous events may limit the effectiveness of many alerting systems. If the base rate is low, as it often is for many real events, then the posterior probability of a true alarm—the probability that given an alarm a hazardous condition exists—can be quite low even for very sensitive warning systems. As a result, operators may not trust or use the system or may attempt to disable it (Satchell, 1993; Sorkin, 1988). Even when operators do attend to the alarm, a low posterior probability may elicit a very slow response (Getty, Swets, Pickett, & Gonthier, 1995).
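The sketch below reproduces this kind of Bayesian alarm analysis; the hit and false-alarm rates are illustrative assumptions, not the values computed by Parasuraman et al. (1997).

```python
def posterior_true_alarm(base_rate, hit_rate, fa_rate):
    """P(hazard | alarm) via Bayes' theorem."""
    p_alarm = hit_rate * base_rate + fa_rate * (1 - base_rate)
    return hit_rate * base_rate / p_alarm

# Even a very sensitive system (99% hits, 1% false alarms) yields a
# low posterior probability when the hazardous event is rare.
print(posterior_true_alarm(base_rate=0.001, hit_rate=0.99, fa_rate=0.01))
# ~0.09: fewer than 1 in 10 alarms signals a real hazard
```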
A Review of Alarm System Design for Advanced Control Rooms of Nuclear Power Plants
Published in International Journal of Human–Computer Interaction, 2018
Because the SDT parameters (d’ and β) fail to take into account the prior probabilities of the two states, they may not be the most relevant parameters for describing a system with a human operator (Meyer & Bitan, 2002). For example, the prior probability (base rate) of a dangerous condition is usually low, so the posterior odds of a true alarm can be quite low even for very sensitive alarm systems with very high hit rates and low false alarm rates (Parasuraman, Hancock, & Olofinboba, 1997). Woods (in press) described a system with a human operator as a two-stage monitoring system, the first stage being the detectors and the second stage being the human monitor, who shifts focus depending on the pattern of outputs from the first stage. Getty et al. (1995) therefore proposed a more informative way to evaluate a system with a human operator: the predictive value of alarms. The positive predictive value (PPV) is the percentage of hits out of all positive responses, and the negative predictive value (NPV) is the percentage of correct rejections out of all negative responses (Meyer & Bitan, 2002). In theory, an alarm system with a higher PPV and a higher NPV is better.
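A minimal sketch of the two predictive values, computed from hypothetical confusion-matrix counts chosen so that the dangerous condition is rare:

```python
# Hypothetical outcome counts for an alarm system.
hits = 95                   # alarm sounded, dangerous condition present
false_alarms = 900          # alarm sounded, condition normal
misses = 5                  # no alarm, dangerous condition present
correct_rejections = 99000  # no alarm, condition normal

# PPV: proportion of hits out of all positive (alarm) responses.
ppv = hits / (hits + false_alarms)
# NPV: proportion of correct rejections out of all negative responses.
npv = correct_rejections / (correct_rejections + misses)

print(f"PPV = {ppv:.3f}, NPV = {npv:.5f}")
# Hit rate is 95% and the false-alarm rate is under 1%, yet PPV is
# below 10% because the dangerous condition is so rare.
```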
Mitigating cognitive bias with clinical decision support systems: an experimental study
Published in Journal of Decision Systems, 2023
Alisa Küper, Georg Lodde, Elisabeth Livingstone, Dirk Schadendorf, Nicole Krämer
The suggested mitigation strategy is gathering objective information and estimating the true base rate of a diagnosis. According to Wang et al. (2019), support systems might achieve this by showing the prior probability of a diagnosis, helping the user seek out the base rate. Physicians report that they would like to see prevalence data to help them rule out unlikely diagnoses (Wang et al., 2019). We therefore want to investigate whether presenting several differential diagnoses with additional information about their base-rate probabilities can mitigate a previously triggered availability bias, and whether integrating such information into a support system would be helpful in supporting less biased decision-making.
Let the Evidence Speak—Using Bayesian Thinking in Law, Medicine, Ecology and Other Areas
Published in Technometrics, 2020
The formal presentation appears only in the appendix, on the last two pages: the likelihood, or probability p(E|A) of evidence E given alternative A; the base rate, or prior probability of the alternative, p(A); the base rate of the evidence, p(E); and the belief, or posterior probability p(A|E) (erratum: the definition of belief on p. 226 should be read as the probability of the alternative given the evidence). The Bayes formula itself, p(A|E) = p(E|A)p(A)/p(E), is presented in the last rows of the book.
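A worked numeric instance in the book’s notation may help; the probabilities below are illustrative assumptions, not examples from the book, and the base rate of the evidence p(E) is obtained by summing over the alternatives.

```python
# Illustrative values for the book's formula p(A|E) = p(E|A) p(A) / p(E).
p_E_given_A = 0.8       # likelihood: probability of evidence E given alternative A
p_A = 0.05              # base rate: prior probability of the alternative
p_E_given_notA = 0.10   # assumed likelihood of E under the competing alternative

# Base rate of the evidence, p(E), by total probability.
p_E = p_E_given_A * p_A + p_E_given_notA * (1 - p_A)

# Belief: posterior probability p(A|E).
belief = p_E_given_A * p_A / p_E
print(belief)  # ~0.30: the evidence raises, but does not settle, the belief
```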