A Critical History of Bioethics
Published in Joel Michael Reynolds, Christine Wieseler, The Disability Bioethics Reader, 2022
The third jurisdiction is public policy bioethics, which debates the ethics of technology and science affecting humans so that conclusions can be incorporated into general policies applied to all citizens. A recent example is the cluster of commissions recommending policy on human gene editing. DS scholars occasionally participate in this jurisdiction – for example, a DS scholar testified before the recent NAS panel on human gene editing. Nor is this activity limited to policy commissions; it includes any writing that ultimately intends to influence law or policy. Jurisdiction is less settled in this task space, and since the topics here are much broader than in the previous two, it will presumably be of more interest to DS scholars.
The Culture and Philosophy of Bio-medicine in India
Published in Makarand R. Paranjape, Healing across Boundaries, 2015
An equally disembedded approach is reflected in technology deployment. The assessment of the need for technology — the type and level of technology use, cultural alternatives to technology deployment, ethical issues about specific technological interventions, particularly invasive technologies, and their cost- and use-effectiveness in India — has not been systematically researched. Technology deployment decisions related to end-of-life care have been insensitive to Indian conceptions of life, death and body, and to the financial implications for families.7 Debates about technology use need special consideration in the context of women’s health and illness. Assisted conception technologies, excessive use of prenatal screening technologies,8 and the technical and changing nature of childbirth in obstetrical procedures all need to be re-evaluated for their role in Indian culture. There is also a need to review the use of medical technology in the diagnosis of asymptomatic diseases. In other words, cultural sensitivity requires developing a distinctly Indian code of medical ethics on technology use.
Nascent Expectations and Hope
Published in Michael van Manen, The Birth of Ethics, 2020
On the one hand, we can blame medical technologies, recognizing that they inherently make the calculation of risk possible; it is as if the technologies themselves bear moral responsibility (Verbeek, 2011). On the other hand, such technologies and how they are integrated into clinical practice reflect an ethics of technology application. For example, ultrasound makes it possible to suspect a genetic diagnosis. ‘Soft markers’ such as nonossified nasal bone, linear arrangement of the tricuspid and mitral valves within the heart, thickened nuchal skin fold, relatively short humerus or femur compared to head size, echogenic intracardiac focus, fetal hydronephrosis, and so forth are all associated with aneuploidy (abnormal chromosome number) (Norton, 2013). Similarly, biochemical measures from the expectant mother’s blood or urine, such as free beta human chorionic gonadotropin (FbhCG) and pregnancy-associated plasma protein A (PAPP-A), also have clear associations with genetic conditions (Tørring, 2016). When we combine maternal age, ultrasound findings, and biochemical values, the probability of aneuploidy can be further refined to determine risk (Alldred et al., 2017). From the determination of risk, either chorionic villus sampling or amniocentesis may be offered for more definitive diagnostic testing, recognizing that such tests carry risks of their own: precipitation of labor, needle injury to the fetus, and infection transmission. Alternatively, cell-free DNA detection methods for non-invasive prenatal testing (NIPT) are available such that genetic conditions may be identified with a high degree of certainty through analysis of tiny amounts of fetal DNA circulating in the expectant mother’s blood (such as the above-mentioned Harmony Prenatal Test) without the risks of pregnancy loss (Dondorp et al., 2016; Morain et al., 2013). To be clear, ‘non-invasive’ here describes physiologic invasiveness relative to chorionic villus sampling or amniocentesis.
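The combination of a maternal-age prior with marker findings described above can be sketched, very roughly, as a Bayesian odds update: an age-based prior risk is multiplied by likelihood ratios for each marker, yielding a posterior probability. The following Python sketch illustrates only the arithmetic of that combination; the function name and every numeric value are hypothetical, and this is not the clinical algorithm used by any actual screening program.

```python
def combined_risk(prior_odds: float, likelihood_ratios: list[float]) -> float:
    """Illustrative only: update prior odds of aneuploidy with marker
    likelihood ratios and return the posterior probability."""
    posterior_odds = prior_odds
    for lr in likelihood_ratios:
        posterior_odds *= lr  # each marker scales the odds multiplicatively
    # convert odds back to a probability
    return posterior_odds / (1.0 + posterior_odds)


# Hypothetical example: a maternal-age prior of 1 in 250 (odds 1/249),
# combined with made-up likelihood ratios for an ultrasound soft marker,
# FbhCG, and PAPP-A.
risk = combined_risk(1 / 249, [5.0, 1.8, 2.2])
print(f"posterior risk ≈ 1 in {1 / risk:.0f}")
```

The multiplicative form assumes the markers are conditionally independent, which real screening programs handle with considerably more care; the sketch is meant only to make the phrase "further refinement of probability" concrete.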
Machine Learning Healthcare Applications (ML-HCAs) Are No Stand-Alone Systems but Part of an Ecosystem – A Broader Ethical and Health Technology Assessment Approach is Needed
Published in The American Journal of Bioethics, 2020
Helene Gerhards, Karsten Weber, Uta Bittner, Heiner Fangerau
We would like to carry this issue further: Char, Abràmoff and Feudtner write that “with growing understanding that mores and values can intentionally or unintentionally become embedded in the design of engineered systems […] transparency will be required regarding any potential conflicts of interest.” The concept of the value-ladenness of technology is indeed well established in applied ethics, the ethics of technology, and Science and Technology Studies. The “potential conflicts of interest” might be part of the value-ladenness of technology and, therefore, of ML-HCAs, but they are by no means the most important problems. Established control instruments exist for such conflicts, even if one has to admit that these instruments are not always utilized satisfactorily and cannot always prevent malpractice. However, when, for instance, the ethics of technology speaks of value-ladenness, this usually refers to something like equality, fairness or justice or, more precisely, to their absence, for example, due to racism, sexism or other discriminatory attitudes toward certain groups of people (cf. Laflamme et al. 2019). Discrimination in the context of ML-HCAs can result from biased training data, as the authors themselves point out, but also from other contextual factors. These factors could be, among others, the attitudes and political convictions of stakeholders, social and economic disparities, the structure of the healthcare system and the means of its financing, or the overall level of technological development of a country.
Superethics Instead of Superintelligence: Know Thyself, and Apply Science Accordingly
Published in AJOB Neuroscience, 2020
Cognitive neuroscience has started to take a serious look at some of the roots of ethical behavior, seeking to understand the mechanisms that make us act ethically. Ethicists are working increasingly toward bridging ethics and technology, devising methods to design ethics into our intelligent systems. But what we are missing are daily-life moral fitness apps: personalized artificial-intelligence applications that help us in our moral self-improvement. We are becoming more intelligent by the day, assisted by autonomous systems in every sort of cognitive task, from memory to communication to coordination and control. Those are the qualities we are proud to enhance with our technology, but only rarely are moral virtues among them. Yet moral virtue needs the attention of our AI-assisted society, probably more than anything else.
Taking a Step Back: The Ethical Significance of DTC Neurotechnology
Published in AJOB Neuroscience, 2019
Verina Wild, Niels Nijsingh, Tereza Hendl
We, members of the ‘META’ team, which explores the ethics of mobile health technologies (mHealth) at the Ludwig-Maximilians-University Munich, have been grappling with the seemingly trivial question of why mHealth calls for robust ethical interrogation. The strategy that we have taken thus far involves connecting the importance of the subject with the value of the basic good of “health,” which would place it in the realm of medical ethics or bioethics. While this approach makes sense, we have realized that the justification for ethical analysis must be explained and substantiated more broadly, beyond other more or less established fields within applied ethics such as the philosophy and ethics of technology or business ethics. To this end, we are developing a wider framework for our research.