Moral Robots
Published in L. Syd M Johnson, Karen S. Rommelfanger, The Routledge Handbook of Neuroethics, 2017
Matthias Scheutz, Bertram F. Malle
Research and development of mechanisms for ensuring normative behavior in autonomous robots has just begun, but it is poised to expand, judging from the increasing number of workshops and special sessions devoted to robot ethics and related topics (Malle, 2015). The prospect of autonomous weapon systems has fueled discussion and spurred the development of systems capable of making ethically licensed decisions, but other morally charged applications (e.g., robots for elder care or robots for sex) have come into focus and are likely to broaden both the discussion and the efforts to design robots with moral capacities.
Ethical and Social Implications of the Use of Robots in Rehabilitation Practice
Published in Pedro Encarnação, Albert M. Cook, Robotic Assistive Technologies, 2017
Liliana Alvarez, Albert M. Cook
Drawing on the ethical tensions that uniquely arise as robots move into more human environments and humans increasingly interact with robotic systems, Riek and Howard (2014) proposed a code of ethics for the HRI profession (p. 6). The principles of this code (outlined in Box 10.4) give rehabilitation professionals an opportunity to reflect on the many ethical considerations of incorporating robots into their practice. Indeed, Riek and Howard's code is meant to broaden the scope of robot ethics beyond research and product development and to involve practitioners. Readers are therefore encouraged to review the proposed code carefully and to consider that the ethical tensions it reflects will expand as robot autonomy increases. Such tensions, however, are not meant to deter practitioners from considering the use of robots in their rehabilitation practice. Rather, we argue that this chapter as a whole is meant to promote continued critical reflection on the implications of using robots, given the evidence-based benefits thoroughly outlined in each chapter of this book.
The “humane in the loop”: Inclusive research design and policy approaches to foster capacity building assistive technologies in the COVID-19 era
Published in Assistive Technology, 2022
John Bricout, Julienne Greer, Noelle Fields, Ling Xu, Priscila Tamplain, Kris Doelling, Bonita Sharma
In the Regulated scenario, robots and AI are at least partially accountable for their actions as semi-autonomous actors. The argument is that because these technologies are capable of modulating people's behavior, AI and robots are partially responsible for the consequences (Hakli & Makela, 2019; Kim & Kim, 2013). However, for those who contend that agency is an all-or-nothing proposition hinging on complete autonomy, the notion of partial responsibility is in dispute (Hakli & Makela, 2019). For the semi-autonomous robot or AI, government regulation can provide recourse when robots behave badly. The European framework for regulating robotics, "RoboLaw," for example, employs a case-study approach, weighing, among other things, questions of liability when humans or their interests are harmed (Palmerini et al., 2016). The South Korean government's Robot Ethics Charter offers another regulatory framework for establishing parameters around safe robot operation (Kim & Kim, 2013).
Artificial Intelligence in Service of Human Needs: Pragmatic First Steps Toward an Ethics for Semi-Autonomous Agents
Published in AJOB Neuroscience, 2020
Travis N. Rieder, Brian Hutler, Debra J. H. Mathews
The central question of this paper is what ethical constraints ought to be placed on a class of emerging technologies defined by the property of "semi-autonomy." The machines and programs within this class are able to make some limited range of decisions (or perform certain actions) without human oversight, and this ceding of control raises unique ethical issues (Bekey 2012). Although the questions we address are sometimes raised under the headings of "robot ethics," "AI ethics," or "machine ethics," it is important to note that the issues we explore do not depend on the technology having a robotic or mechanical body or a certain level of intelligence. Sophisticated software programs and algorithms raise unique ethical issues if they can "choose" to, say, exclude a job candidate from consideration or (more dramatically) shut down power to half the United States.
Coverage of ethics within the artificial intelligence and machine learning academic literature: The case of disabled people
Published in Assistive Technology, 2021
Aspen Lillywhite, Gregor Wolbring
Discussions around the ethics of AI/ML have taken place for some time outside of the academic literature (Asilomar and AI conference participants, 2017; European Group on Ethics in Science and New Technologies, 2018; Floridi et al., 2018; IEEE, 2018; Participants in the Forum on the Socially Responsible Development of AI, 2017; Partnership on AI, 2018; The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, 2018), and within the academic literature under the headers of "machine ethics" (Köse, 2018) and "AI ethics" (Burton et al., 2017), as well as in relation to individual applications of AI/ML such as robotics ("roboethics") and brain–computer interface use (Nijboer, Clausen, Allison, & Haselager, 2013; Sullivan & Illes, 2018).