Epistemic paternalism
Published in Kalle Grill, Jason Hanna, The Routledge Handbook of the Philosophy of Paternalism, 2018
Of course, any concern about bias would be greatly diminished were it simply a matter of being more careful and vigilant in our thinking. One way to flesh out this thought is in terms of what we may refer to as the self-correction strategy, on which the individual agent corrects for bias of her own accord. However, there are two problems with this strategy. The first problem is one of motivation, arising out of the fact that any attempt to deal with bias has to take into account not only that we are biased, but also that we suffer from what Emily Pronin and colleagues (2002) have referred to as a “bias blind spot,” on account of which we tend to underestimate the extent to which we are prone to bias. This blind spot should be understood in the context of the well-known psychological fact that, depressed people aside (Taylor and Brown 1988), we tend to rate ourselves as above average on desirable traits (Alicke 1985; Brown 1986). This overconfidence extends to our evaluations of our own epistemic capabilities. As Pronin (2007: 37) notes in an overview, “people tend to recognize (and even overestimate) the operation of bias in human judgment – except when that bias is their own.”
Cognitive and Affective Biases, and Logical Failures
Published in Pat Croskerry, Karen S. Cosby, Mark L. Graber, Hardeep Singh, Diagnosis, 2017
Blind Spot Bias: even though many people can detect bias in the decision making of others, they may not be so vigilant or effective in detecting it in themselves. This metabias, which shows considerable individual variation [62], was originally described by Pronin et al. [63] as the bias blind spot. Our assessments of ourselves are based largely on our thoughts and feelings, whereas our assessments of others are based on their external observable behavior [64]. We have a strong tendency to believe that our own perceptions reflect true reality (naïve realism), while those of others are seen as biased by self-interest, personal allegiances, an emphasis on dispositional rather than situational explanations (fundamental attribution error), and other factors. In the process, we become less likely to follow the advice of others [65], although the bias blind spot does not appear to make one any less competent at decision making overall [62]. An important question is whether vulnerability to biases makes one more or less likely to exhibit the bias blind spot. Testing people on six common cognitive biases, West et al. [66] found that cognitive sophistication did not reduce vulnerability to the bias blind spot; however, Scopelliti et al. [62] did report that higher susceptibility to the bias blind spot appeared to be a barrier to mitigation training for the robust fundamental attribution error bias. An important aspect of the bias blind spot is that those who are more susceptible to it are less likely to engage in strategies to improve their decision making, whether by taking the advice of others or through corrective training, and may be resistant to bias mitigation training [62].
Forensic Mental Health Practitioners’ Use of Structured Risk Assessment Instruments, Views about Bias in Risk Evaluations, and Strategies to Counteract It
Published in International Journal of Forensic Mental Health, 2022
Jennifer Kamorowski, Corine de Ruiter, Maartje Schreuder, Karl Ask, Marko Jelícic
A number of recently published surveys suggest that forensic mental health practitioners are aware of, and concerned about, the potential for bias in forensic mental health evaluations (Kukucka et al., 2017; Neal & Brodsky, 2016; Neal & Grisso, 2014b; Zapf et al., 2018). However, many remain skeptical about bias affecting their own work, as evidenced by a bias blind spot (Pronin et al., 2002), that is, the belief that they are less prone to bias than their colleagues (Boccaccini et al., 2017; Kukucka et al., 2017; Neal & Brodsky, 2016; Zapf et al., 2018; Zappala et al., 2018). For example, Zapf and colleagues (2018) surveyed 1,099 mental health practitioners who conduct forensic evaluations, and just over half (52.2%) agreed that their own judgments can be influenced by cognitive bias.
Is Hindsight Really 20/20?: The Impact of Outcome Information on the Decision-Making Process
Published in International Journal of Forensic Mental Health, 2018
Amanda Beltrani, Amanda L. Reed, Patricia A. Zapf, Randy K. Otto
Similarly, Zappala, Reed, Beltrani, Zapf, and Otto (2018) surveyed a sample of 80 forensic mental health professionals to assess for a bias blind spot. Participants completed a survey inquiring about four biases (illusory correlation, hindsight bias, fundamental attribution error, and confirmation bias) that occur in forensic evaluation. Using a 9-point Likert-type scale, half the participants were asked to rate their own susceptibility to each bias, whereas the other half were asked to rate their peers’ susceptibility to the same biases. These researchers found that a bias blind spot related to biases in general, and hindsight bias in particular, was evident in their sample of forensic evaluators.
Current Practices in Incorporating Culture into Forensic Mental Health Assessment: A Survey of Practitioners
Published in International Journal of Forensic Mental Health, 2022
Amanda M. Fanniff, Taylor M. York, Alexandra L. Montena, Kenzie Bohnsack
Published recommendations emphasize the importance of considering competence prior to accepting referrals, seeking out cultural consultation, and educating oneself on relevant cultural considerations. Evaluators reported engaging in a wide range of methods to enhance their culturally competent forensic practice; however, they rarely referred evaluations to other professionals based on the cultural identities of the examinee. There could be a variety of reasons for this. Given that the respondents reported engaging in a variety of strategies to enhance their cultural competence, they may seek out literature and consultation regarding examinee identities rather than refer out. Additionally, some evaluators may work in communities in which there are few professionals to whom they could refer cases. It is also possible that evaluators do not recognize when referring an evaluation to another professional is more appropriate than accepting the referral. Evaluators generally believed that they were aware of their own biases, aware of the impact of those biases on the evaluation, and able to overcome those biases to conduct objective evaluations. In contrast, respondents expressed less confidence in other evaluators’ ability to overcome their biases. The bias blind spot, or the tendency to recognize common cognitive and motivational biases and errors in others while believing oneself to be less subject to these biases, is well documented in general (e.g., Pronin et al., 2002) and among forensic psychologists (e.g., Neal & Brodsky, 2016). There are empirically supported strategies to reduce biases; however, evaluators often rely upon introspection, which is ineffective and may actually contribute to the bias blind spot (e.g., MacLean et al., 2019; Neal & Brodsky, 2016; Pronin & Kugler, 2007).
More effective strategies for detecting one’s biases, such as tracking one’s decisions (e.g., Gowensmith & McCallum, 2019), may facilitate better recognition of when to refer cases to another professional. Consultation with peers may also facilitate more effective recognition of biases and may facilitate culturally-informed practice when referral is not necessary or possible.