Diagnostic Support from Information Technology
Published in Pat Croskerry, Karen S. Cosby, Mark L. Graber, Hardeep Singh, Diagnosis, 2017
Strange things happen as clinicians seek to be thorough but efficient. Daily notes are assembled with “cut-and-paste” features [37] that fill them with redundant, irrelevant, and out-of-date facts, contributing to lengthy “note bloat” [38]. Irrelevant and inaccurate information is passed along from day to day, and even between admissions. Fitzgerald describes an encounter with a student who reported that her patient had undergone bilateral below-the-knee amputations (BKAs) [39]. Upon examining the patient, Fitzgerald asked the trainee how she accounted for the patient’s two apparently healthy legs. When confronted with the real patient, the student sheepishly admitted that she had accepted the information in the medical record at face value. The error had apparently crept into the record when “DKA times 2” (meaning diabetic ketoacidosis) was mistaken for “BKA times 2,” a fact thereafter “enshrined” in the medical record, not just for this student but for the several teams of clinicians who preceded her [39]. Perhaps this is an example of automation bias: the tendency to accept something as true, or to fail to question it, when the information presents itself in an electronic form that is perceived to be authoritative [40].
Where to Now?
Published in Bill Runciman, Alan Merry, Merrilyn Walton, Safety and Ethics in Healthcare, 2007
Bill Runciman, Alan Merry, Merrilyn Walton
Although it has been shown that clinicians will use online information and decision support, and that these can improve their decision-making,27,28,29,30 evidence is emerging that there are major problems with most commercially available information technology systems.31 Many have been developed with little regard for how they will affect clinicians’ workflow, and new types of error, such as ‘automation bias’, are emerging. It has been argued that there is an urgent need to regulate software and information technology systems that affect patient care, as the potential for serious harm is great.32
A Research Ethics Framework for the Clinical Translation of Healthcare Machine Learning
Published in The American Journal of Bioethics, 2022
Melissa D McCradden, James A Anderson, Elizabeth A. Stephenson, Erik Drysdale, Lauren Erdman, Anna Goldenberg, Randi Zlotnik Shaul
The basic ethical challenge in the clinical context is that the addition of an ML model to an extant clinical workflow may involve significant departures from the standard of care, with attendant risks. By contrast, quality improvement (QI) is typically undertaken to improve a process or intervention that is already known to be beneficial to patients (Lynn 2004). The relatively minimal oversight of QI is therefore predicated on the assumption that the benefits and risks of the interventions under study are already known, and that empirical support justifies their use in patient care (Baily et al. 2006). We now know that ML interventions are subject to fairness issues and other unintended secondary effects (e.g., over-reliance, automation bias; Tschandl et al. 2020) that may exacerbate these risks. It is also accepted that models will inevitably be imperfect; knowing precisely under what circumstances they ought to be considered authoritative is an empirical question that requires robust evidence, particularly when model outputs may override clinical judgment.
Privacy and ethical challenges in next-generation sequencing
Published in Expert Review of Precision Medicine and Drug Development, 2019
Nicole Martinez-Martin, David Magnus
The ML system itself generates the algorithms, leading to a ‘black box’ issue in which it is difficult even for the developers themselves to evaluate the specific reasoning behind the outcomes generated [95]. This makes transparency difficult. Furthermore, as industry developers increasingly invest in advancing the use of ML for NGS [96], they are generally reluctant to share information about the workings of their ML systems for proprietary reasons. Recently, there have been calls for the development of ML systems that can also explain the reasoning behind their findings. ML systems can also be subject to ‘automation bias’, in which results or findings arising from an automated tool are perceived as inherently more objective or accurate than other sources of information, so that the limitations of the ML systems are overlooked. Clinicians will need education regarding the ML systems, the data sets, and their limitations, including the potential for bias [10]. Institutions will also need to take these limitations into account when formulating policy on the use of such systems and NGS data, particularly for informed consent and the return of results in NGS. Finally, while it is important to be aware of the limitations of ML itself, it is also critical that attention be paid to the systems and processes into which the AI is being integrated, in order to understand and address the ways that the use of AI for NGS may affect fiduciary relationships. For example, if clinicians rely on AI-generated findings for their use of NGS in ways that substitute the judgment of the software for their own, that can have implications for clinical accountability and the physician–patient relationship that will need to be studied and addressed.
Application of Artificial Intelligence in the Diagnosis and Management of Corneal Diseases
Published in Seminars in Ophthalmology, 2021
Maryam Tahvildari, Rohan Bir Singh, Hajirah N. Saeed
Automation bias in the application of AI systems is the phenomenon whereby clinicians accept the guidance of an automated system and cease searching for confirmatory evidence.63 Reliance on AI systems for decision-making has been shown to be strongest when the machine predicts a case to be normal, increasing the likelihood of a missed diagnosis. Decision-making in AI systems is not as dynamic as in humans, as the systems are highly dependent on periodic training and updates.64 New insights and subsequent protocol changes in medicine may require constant updates, reducing the cost-effectiveness of these systems.