Ethical and Social Implications of the Use of Robots in Rehabilitation Practice
Published in Pedro Encarnação, Albert M. Cook, Robotic Assistive Technologies, 2017
Liliana Alvarez, Albert M. Cook
In the 1942 science fiction short story “Runaround,” Russian-American writer Isaac Asimov articulated the “Three Laws of Robotics,” listed in Box 10.3 (Asimov 1950, 26). Although Asimov used the laws to enrich his fiction, they gained wide adoption and are considered foundational in the fields of robotics and machine ethics (Anderson 2008). Asimov’s Laws adhere specifically to the principle of nonmaleficence. Such distinctions, however, are not always straightforward in rehabilitation and health care. For example, when a robot is programmed to remind a person to take his or her medication, the most appropriate action if the person refuses can be difficult to discern: allowing the person to miss a dose might cause harm, but insisting on or forcing the action violates the person’s autonomy (Deng 2015). Programming a robot to negotiate such circumstances is challenging. One partial answer to this and similar ethical tensions in robotics has been to explore machine learning and the ways in which robots can learn.
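The tension between nonmaleficence and autonomy in the medication-reminder example can be made concrete with a minimal, purely illustrative sketch of an escalation policy. All names here (`medication_policy`, the action strings, the `max_reminders` threshold) are hypothetical, not drawn from any deployed system; the point is only that the robot can be designed to inform and escalate to a human rather than compel:

```python
# Hypothetical escalation policy for a medication-reminder robot.
# It never forces the action (respecting autonomy) but escalates
# persistent refusals to a human caregiver (limiting harm).

def medication_policy(refusal_count, max_reminders=2):
    """Return the robot's next action after `refusal_count` refusals."""
    if refusal_count == 0:
        return "remind"            # initial gentle reminder
    if refusal_count <= max_reminders:
        return "remind_with_info"  # explain consequences, respect the choice
    return "notify_caregiver"      # cede the decision to a human

# The patient can refuse at every step; the robot informs and escalates
# but never compels.
actions = [medication_policy(n) for n in range(4)]
print(actions)
# ['remind', 'remind_with_info', 'remind_with_info', 'notify_caregiver']
```

Even this toy policy embeds contestable ethical choices, such as how many refusals to tolerate before involving a third party, which is precisely the kind of decision the passage argues is difficult to discern.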
Where Bioethics Meets Machine Ethics
Published in The American Journal of Bioethics, 2020
Char et al. (2020) question the extent and degree to which machine learning applications should be treated as exceptional by ethicists. It is clear that many of the ethical issues raised by machine learning applications are familiar from other settings. The framework for identifying these issues offered by Char et al., alongside numerous others (Jobin et al. 2019), can be useful in mapping them out. There is at least one clear way in which machine learning is exceptional: the degree of formalism—of codification—demanded of the moral frameworks employed in the development of applications. This topic has spawned the sub-field of machine ethics, part of the broader field of AI ethics (Anderson and Anderson 2011). By getting into some of the technical weeds, machine ethics exposes a set of complex ethical decisions that application developers face. Insights from machine ethics can thus act as an input to the work of bioethicists tasked with thinking through machine learning applications. More generally, the codification of moral principles necessary for machine learning applications highlights the fuzziness of those principles. A fruitful interface for machine ethics and bioethics may be the development of context-dependent processes for aligning on finer-grained definitions of these principles.
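The “fuzziness” exposed by codification can be illustrated with a well-known case: the principle of fairness admits multiple precise formalizations that can disagree on the same decisions. The sketch below, using invented toy data, codifies fairness in two standard ways (equal selection rates versus equal true-positive rates); the function names and data are illustrative only:

```python
# Two incompatible codifications of the fuzzy principle "fairness",
# applied to hypothetical binary decisions for two demographic groups.

def selection_rate(preds):
    """Fraction of individuals receiving the positive decision."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Fraction of truly positive individuals receiving the positive decision."""
    positives = [p for p, y in zip(preds, labels) if y == 1]
    return sum(positives) / len(positives)

# Toy decisions and ground-truth labels for groups A and B.
preds_a, labels_a = [1, 1, 0, 0], [1, 0, 1, 0]
preds_b, labels_b = [1, 0, 0, 0], [1, 1, 0, 0]

# Codification 1: demographic parity (equal selection rates).
parity_gap = abs(selection_rate(preds_a) - selection_rate(preds_b))

# Codification 2: equal opportunity (equal true-positive rates).
opportunity_gap = abs(true_positive_rate(preds_a, labels_a)
                      - true_positive_rate(preds_b, labels_b))

print(parity_gap)       # 0.25 -> "unfair" under demographic parity
print(opportunity_gap)  # 0.0  -> "fair" under equal opportunity
```

The same decisions are fair under one codification and unfair under the other, so a developer must choose—this is the kind of fine-grained, context-dependent definitional work the passage argues machine ethics and bioethics should align on.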
Artificial Intelligence in Service of Human Needs: Pragmatic First Steps Toward an Ethics for Semi-Autonomous Agents
Published in AJOB Neuroscience, 2020
Travis N. Rieder, Brian Hutler, Debra J. H. Mathews
The central question of this paper is what ethical constraints ought to be placed on a class of emerging technologies defined by the property of “semi-autonomy.” The machines and programs within this class are able to make some limited range of decisions (or perform certain actions) without human oversight, and this ceding of control raises unique ethical issues (Bekey 2012). Although the questions we address are sometimes raised under the headings of “robot ethics,” “AI ethics,” or “machine ethics,” it is important to note that the issues we explore do not depend on the technology having a robotic or mechanical body, nor on its possessing a certain level of intelligence. Sophisticated software programs and algorithms raise unique ethical issues if they can “choose” to, say, exclude a job candidate from consideration or (more dramatically) shut down power to half the United States.
Coverage of ethics within the artificial intelligence and machine learning academic literature: The case of disabled people
Published in Assistive Technology, 2021
Aspen Lillywhite, Gregor Wolbring
Discussions around the ethics of AI/ML have taken place for some time outside the academic literature (Asilomar and AI conference participants, 2017; European Group on Ethics in Science and New Technologies, 2018; Floridi et al., 2018; IEEE, 2018; Participants in the Forum on the Socially Responsible Development of AI, 2017; Partnership on AI, 2018; The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, 2018) and within the academic literature under the headers of “machine ethics” (Köse, 2018) and “AI ethics” (Burton et al., 2017), as well as in relation to individual applications of AI/ML such as robotics (roboethics) and brain-computer interface use (Nijboer, Clausen, Allison, & Haselager, 2013; Sullivan & Illes, 2018).