Thinking to Decide in Clinical and Community Medicine: Subjects, Thinking Tools, and Vehicles for the Best Possible Decisions in Practice and Research
Published in Milos Jenicek, How to Think in Medicine, 2018
There is a big difference between an “impression-based” and an “evidence-based” algorithm. A medical algorithm today may be seen also as a graphic presentation of a clinical practice or community medicine activity guideline and be the subject of valid guideline rules.
Profiling in public health
Published in Sridhar Venkatapuram, Alex Broadbent, The Routledge Handbook of Philosophy of Public Health, 2023
Medical algorithms or medical AI profile patients, forming “beliefs” about likely diagnoses and most appropriate treatments.16 This, I suggest, raises questions about the ethical and epistemic permissibility of profiling beliefs about patients when they are formed by artificial as opposed to human agents. A medical algorithm, for example, might be programmed to form beliefs about the probability that a patient (call them “Lee”) has a certain medical condition (e.g. HIV/AIDS) based on the presence or absence or degree to which the patient is represented as having certain features (e.g. positive HIV test, recurrent infections, as well as sex, sexual orientation, racial group, socioeconomic status)—i.e. is a member of certain social groups picked out by those features. Compare this predictive algorithm to the now infamous Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) algorithm in the United States. COMPAS profiled individuals’ risk of criminal recidivism, and was unjustly and inaccurately biased against ethnic minorities, and in particular Black individuals (Larson et al. 2016). The COMPAS algorithm was programmed to form beliefs about the risk an individual would commit crimes again based on the presence or absence or degree to which the individual is represented as having certain features (in this case, racial group membership, among other features). A side-by-side comparison of medical and criminal predictive algorithms again raises the question of why it is that we seem comfortable, ethically at least, with AI profiling in medical contexts but not in non-medical ones, especially regarding crime and punishment.
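The passage above describes an algorithm that turns the features a patient is represented as having into a "belief" about a probability. As a purely illustrative sketch (not any real clinical model or COMPAS itself; every feature name, weight, and number below is invented), such profiling can be reduced to a weighted logistic score over binary features:

```python
from math import exp


def sigmoid(z: float) -> float:
    """Map a real-valued score to a probability in (0, 1)."""
    return 1.0 / (1.0 + exp(-z))


# Hypothetical weights for illustration only: these are not drawn
# from any real diagnostic model or published source.
WEIGHTS = {
    "positive_hiv_test": 4.0,
    "recurrent_infections": 1.5,
}
BIAS = -3.0  # baseline log-odds when no feature is present


def condition_probability(features: dict) -> float:
    """Combine binary features into a probability via a logistic score."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items() if k in WEIGHTS)
    return sigmoid(z)


# "Lee" from the passage, represented as binary features
lee = {"positive_hiv_test": 1, "recurrent_infections": 0}
print(round(condition_probability(lee), 3))  # 0.731
```

The ethical worry the author raises can be located precisely in this structure: if group-membership features (racial group, socioeconomic status) were added to `WEIGHTS`, the model's "belief" about an individual would shift for reasons tied to the groups they belong to rather than to their clinical presentation.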
As long as you think of it
Published in Roger Neighbour, Jamie Hynes, Iona Heath, The Inner Physician, 2018
Medical algorithms are popular. Their advocates imply, as if it were something to be proud of, that any and every clinical problem will yield to an onslaught of sequential, unambiguous questions. There are some grounds for this optimism. Algorithmic thinking is what underpins the phenomenal success of the computer software industry. Complicated problems can be successfully analysed, and many human activities convincingly simulated, by breaking them down into chains of binary either/or decisions. So why not medicine? Given the enormous advantages that information technology has brought to the infrastructure and organisation of health services, there is every reason, one might think, to encourage computer-based thinking to infiltrate the clinical process. Specifically, there is much to be said in favour of the clinical algorithm, or 'decision tree', as a basis for diagnosis.

The knowledge and evidence base on which contemporary medicine relies continues to grow exponentially. No human doctor, however conscientious, can possibly keep on top of all the advice and recommendations put out by journals, advisory bodies and policy makers, and with which the doctor is expected to comply, on pain of being sued for negligence. Algorithms, which can be swiftly updated and disseminated, look like a lifebelt tossed to the doctor overwhelmed by a torrent of information.
Through their use, every clinician - and therefore every patient - stands to benefit from the latest research and the best of expert opinion.

As doctors are busier than ever and their time more expensive, an algorithmic 'step by prescribed step' approach makes it possible for medical decision-making to be delegated in comparative safety to other health professionals who are quicker to train and cheaper to employ, such as nurses, paramedics, physician assistants and call centre staff.

Algorithms, which reliably highlight 'red flag' warning signs and key discriminating features in a clinical presentation, are educationally useful for doctors in training, and provide a safety net against misdiagnosis for inexperienced clinicians.
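The "chains of binary either/or decisions" that Neighbour and colleagues describe can be made concrete with a toy decision tree. This is a minimal sketch; the questions and outcomes below are invented for illustration and are not clinical advice:

```python
# A toy triage decision tree: each node asks one yes/no question,
# mirroring the chains of binary either/or decisions described above.
# Questions and outcomes are hypothetical, for illustration only.
TREE = {
    "question": "Red-flag sign present?",
    "yes": {"outcome": "Refer urgently"},
    "no": {
        "question": "Symptoms persisting beyond two weeks?",
        "yes": {"outcome": "Book GP appointment"},
        "no": {"outcome": "Self-care advice and safety-netting"},
    },
}


def traverse(node: dict, answers: dict) -> str:
    """Walk the tree using a mapping of question -> bool answer."""
    while "outcome" not in node:
        node = node["yes"] if answers[node["question"]] else node["no"]
    return node["outcome"]


answers = {
    "Red-flag sign present?": False,
    "Symptoms persisting beyond two weeks?": True,
}
print(traverse(TREE, answers))  # Book GP appointment
```

This structure also illustrates why such algorithms can be delegated and swiftly updated, as the passage notes: changing the guidance means editing the tree's nodes, while the traversal procedure, and hence the training needed to follow it, stays the same.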
A medical algorithm for Cotard delusion based on more than 300 literature cases
Published in International Journal of Psychiatry in Clinical Practice, 2021
Rosa A. S. Couto, Luís Moreira Gonçalves
This work has advanced the knowledge about CD, mainly by systematising its diagnostic and therapeutic approaches and proposing a medical algorithm intended to be helpful in clinical practice. Until now, this entity has been managed on the basis of a scattered bibliography, consisting mainly of case reports, leading to possible underdiagnosis and to difficulty in deciding on the best clinical guidance one could offer the patient. Although it is not recognised as a diagnostic entity in our current classification systems, medical doctors should be aware of this uncommon condition so as to prompt its recognition and a specific medical approach according to the best evidence.
Swedish Inflammatory Bowel Disease Register (SWIBREG) – a nationwide quality register
Published in Scandinavian Journal of Gastroenterology, 2019
Jonas F. Ludvigsson, Marie Andersson, Jonas Bengtsson, Michael Eberhardson, Ulrika L. Fagerberg, Olof Grip, Jonas Halfvarson, Henrik Hjortswang, Susanna Jäghult, Pontus Karling, Caroline Nordenvall, Ola Olén, Malin Olsson, Martin Rejler, Hans Strid, Pär Myrelid
For these reasons, several Swedish physicians with a special interest in IBD started the Swedish QR for IBD (SWIBREG) in 2005. Since then, it has also become clear that access to detailed digital patient data, coupled with the parallel growth of computing power and the use of medical algorithms, lends itself to the creation of medical decision systems to the benefit of patients.
‘Other patients become a secondary priority:’ perceptions of Estonian frontline healthcare professionals on the influence of COVID-19 on health (in)equality and ethical decision-making
Published in Journal of Communication in Healthcare, 2022
Kadi Lubi, Kadri Simm, Kaja Lempu, Jay Zameska, Angela Eensalu-Lind
Although institutional and procedural changes were performed by following approved principles [30], the respondents noticed changes in patient management in terms of patient-centredness and medical algorithms (i.e. triage, examination, diagnosing) as well as social repercussions (i.e. isolation).