Ethical rules
Published in Vahap Tecim, Sezer Bozkus Kahyaoglu, Artificial Intelligence Perspective for Smart Cities, 2023
Resolving the right-wrong dilemma can be challenging: the smart technologies deployed around the world are the same, yet different cultures hold different moral values. With the rapid development of AI, ethical issues are becoming increasingly complex and resistant to consensus (Russell and Norvig, 2010). The new industrial revolution is expected to bring an egalitarian perspective to human values and to focus on giving machines a sense of morality; to achieve this, machine values must be compatible with human goals. In this context, various suggestions can be made. Analytical approaches to these ethical considerations can be explained as follows:
Creating policies and regulatory frameworks by engaging multiple stakeholders
Rather than benefiting a small number of individuals, AI should be structured through inclusive legislation and regulation for the benefit of all humankind. Initiatives toward this goal were accelerated at the UNESCO General Conference held on November 24, 2021, where the "Recommendation on the Ethics of Artificial Intelligence" was adopted in partnership with many stakeholders to set global standards (UNESCO, 2021).
Current State of AI Ethics
Published in Catriona Campbell, AI by Design, 2022
At its most simple, ethics is about right and wrong. Creating ethics for Artificial Intelligence (AI) is how we programme right and wrong into AI so it can make human-like decisions. Before that, we need to put ethics into a recognisable format we can all abide by, which usually means rules. People get rules. Rules provide order because people can read and understand what is OK to do and what is not. Rules dominate our everyday lives: how we drive, worship, play sport and interact with one another. The first set of rules for AI was written in 1942 by sci-fi writer Isaac Asimov. The "Three Laws of Robotics" were developed in his story Runaround to protect humans against harm at the hands of robots:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov later introduced a fourth, or zeroth, law that outranked the others: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Overview of AI in Education
Published in Prathamesh Churi, Shubham Joshi, Mohamed Elhoseny, Amina Omrane, Artificial Intelligence in Higher Education, 2023
Archana Bhise, Ami Munshi, Anjana Rodrigues, Vidya Sawant
Developing and maintaining ethical and professional AI standards for the technology creators who contribute to the design, development, deployment, management and maintenance of AI products and services is the need of the hour. UNESCO has, for the first time, decided to develop a legal, global document on the ethics of AI (Elaboration of a Recommendation on the ethics of Artificial Intelligence, 2021).
Moral and social ramifications of autonomous vehicles: a qualitative study of the perceptions of professional drivers
Published in Behaviour & Information Technology, 2023
Veljko Dubljević, Sean Douglas, Jovan Milojevich, Nirav Ajmeri, William A. Bauer, George List, Munindar P. Singh
As these rapid changes in AV technology occur, urgent moral questions press researchers and policymakers alike to develop a comprehensive approach to the study of ethics in artificial intelligence (AI) systems, including AVs (Taddeo and Floridi 2018; Winfield and Jirotka 2018). AI systems—including AVs, medical bots, and automated trading systems—shape socioeconomic structures and affect the lives of many citizens (Ford 2015; Frank et al. 2019; Lyons et al. 2021). These AIs influence public safety, particularly with AVs and automated mass transportation systems, as illustrated by the automation problems in the Boeing 737 Max aeroplane that caused crashes in 2018 and 2019 (Gelles 2019). Although AI systems have the potential to save lives, they also raise important new safety and ethical concerns, including the way AI systems deal with (human) autonomy and dignity, justice and equity, and data protection and privacy, among other issues (EGE 2018).
Responsible innovation ecosystem governance: socio-technical integration research for systems-level capacity building
Published in Journal of Responsible Innovation, 2023
Mareike Smolka, Stefan Böschen
In response to these shortcomings, scholars introduced the ecosystem concept. Braun and Könninger (2018) identify the eco-systemic approach as one of several that aim to account for the complexities of public participation; they consider Chilvers, Pallett, and Hargreaves' (2018) 'ecologies of participation' one such approach. The term 'ecology' emphasizes the interdependencies between the practices, technologies, and political settings of participation. Stahl (2021) suggests that the ethics of Artificial Intelligence (AI) could be understood from an ecosystem perspective to derive recommendations for handling ethical dilemmas. Using the ethics of AI as a case study in a later article (Stahl 2022), he proposes that the literature on innovation ecosystems in management and organization studies, on the one hand, and R(R)I discourses and practices, on the other, could mutually inform each other. R(R)I could shift innovation ecosystem management from a focus on economic success toward responsible governance that integrates ethical and social awareness into the system. In turn, the innovation ecosystem literature could help R(R)I scholars consider the 'systems character of research and innovation and how to deal with it' (9). In the light of the abovementioned methodological limitations of existing R(R)I engagement studies, Stahl calls for the development of governance approaches to create 'responsible innovation ecosystems' (1).