Ethical rules
Published in Vahap Tecim, Sezer Bozkus Kahyaoglu, Artificial Intelligence Perspective for Smart Cities, 2023
According to the Stanford Encyclopedia of Philosophy (2020), “Machine ethics is ethics for machines, for ‘ethical machines’, for machines as subjects, rather than for the human use of machines as objects”. Asimov (1942) argued that machine ethics should be based on rules and therefore proposed the “three laws of robotics”:

First Law – A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law – A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

Third Law – A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

O’Grady and O’Hare (2012) state that AI-based robots will become the future residents of smart cities. Accordingly, discussion has begun on how machine ethics will operate within the framework of morality. One of the fundamental questions raised in this direction is, “Can a machine behave ethically towards other machines and humans?” and, if so, “What moral standards should be applied to judge machine behaviour?”. For example, in a situation where an accident is inevitable, should an autonomous vehicle sacrifice its passenger rather than hit a pedestrian? How will it make that choice? At this point, the car’s algorithm, rather than a human being, becomes the decision-maker. Current legal frameworks do not fully answer this problem. The fixed set of values built in at design time leads to an ethical dilemma for autonomous vehicles (Schoettle and Sivak, 2014).
Reducing moral ambiguity in partially observed human–robot interactions
Published in Advanced Robotics, 2021
Machine ethics, on the other hand, explicitly addresses the normative aspects of robot behaviour. Various ethical constraints have been proposed, and various attempts have been made to ‘algorithmise’ normative theories such as consequentialism and deontology [18–22]. However, while this moves the debate forward on which constraints machine systems should be subject to and how these constraints should be encoded, it overlooks how robot behaviour appears to an observer.