Introduction
Published in Wendell Wallach, Peter Asaro, Machine Ethics and Robot Ethics, 2020
This volume is a collection of, and introduction to, scholarly work that focuses upon robots and ethics. The topic is divided into two broad fields of research. Robot ethics, or roboethics, explores how people should design, deploy, and treat robots (Veruggio & Operto 2006). It is particularly interested in how the introduction of robots will change human social interactions and what human social concerns tell us about how robots should be designed. Machine ethics or machine morality considers the prospects for creating computers and robots capable of making explicit moral decisions (Allen, Varner, & Zinser 2000; Moor 2006; Anderson & Anderson 2007). What capabilities will increasingly autonomous robots require 1) to recognize when they are in ethically significant situations, and 2) to factor human ethical concerns into selecting safe, appropriate, and moral courses of action? There is no firm distinction between robot ethics and machine ethics, and some scholars treat machine ethics as a subset of robot ethics. Many of the scholars represented feel their work draws upon and contributes to both fields.
Digital Ethics
Published in David Burden, Maggi Savin-Baden, Virtual Humans, 2019
David Burden, Maggi Savin-Baden
Robot ethics covers topics such as ethical design and implementation, as well as considerations of robot rights. Hern (2017) reported that the European Parliament has urged the drafting of a set of regulations to govern the use and creation of robots and artificial intelligence. The areas suggested as needing to be addressed are:

- The creation of a European agency for robotics and AI;
- A legal definition of ‘smart autonomous robots’, with a registration system for the most advanced;
- An advisory code of conduct to guide the ethical design, production and use of robots;
- A new reporting structure requiring companies to report the contribution of robotics and AI to their economic results for the purposes of taxation and social security contributions; and
- A mandatory insurance scheme for companies to cover damage caused by their robots.
Dignity and broader impacts
Published in Eduard Fosch-Villaronga, Robots, Healthcare, and the Law, 2019
In December 2017, I attended the conference “Scientific Aspects of Development and Implementation of Emotionally Intelligent Human-Inspired Robots: Enthusiasm and Scepticism,” organized by Prof. Dr. Aleksandar Rodić from the Mihajlo Pupin Institute in Belgrade. After my talk, I received this email:

Good Morning Eduard,

Thanks for your email […] It is a pleasure to meet you and know about the field you are working in. It is really interesting to hear about the ethical, legal and safety issues regarding robotics. To be honest, I never thought about it. But your talk in which you highlighted “Who implements emotions?” is really a point which interests me a lot. Since I am implementing algorithms and emotions for the robot for specific situations, there is definitely a question there: whether am I the right person to decide “How robots should behave” or should there be an expert who tells me about this information. I found this out on relatively smaller scale that emotions that I code/implement on the robot sometimes doesn’t appear natural to some other subjects. The reason is I am implementing them based on my view or opinion about situations which more often than not differs from others. For me it is quite informative and somehow opens a new perspective about my work […]

I was so surprised that I even screenshotted the email and uploaded it to Twitter.3 It is just one example of the problem facing the people behind the creation of robots and artificially intelligent systems: most of the time, they work in isolation, with little awareness of the implications that their work might have. As Asaro (2006) explains, roboethics comprises the ethics of the people who interact with robots, the ethical systems of the people who design robots, and the ethical systems built into robots. When we create a system and design the emotions that a robot will exhibit, we should think about how these three sides intertwine.
The robot’s task performance should respect the ethics and norms governing all the relationships arising from human–robot interaction (HRI).
Coverage of ethics within the artificial intelligence and machine learning academic literature: The case of disabled people
Published in Assistive Technology, 2021
Aspen Lillywhite, Gregor Wolbring
Discussions around the ethics of AI/ML have taken place for some time outside of the academic literature (Asilomar and AI conference participants, 2017; European Group on Ethics in Science and New Technologies, 2018; Floridi et al., 2018; IEEE, 2018; Participants in the Forum on the Socially Responsible Development of AI, 2017; Partnership on AI, 2018; The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, 2018), and within the academic literature under the headings of “machine ethics” (Köse, 2018) and “AI ethics” (Burton et al., 2017), as well as in relation to individual applications of AI/ML such as robotics (roboethics) and brain–computer interface use (Nijboer, Clausen, Allison, & Haselager, 2013; Sullivan & Illes, 2018).