Legal Ethical and Policy Implications of Artificial Intelligence
Published in Utpal Chakraborty, Amit Banerjee, Jayanta Kumar Saha, Niloy Sarkar, Chinmay Chakraborty, Artificial Intelligence and the Fourth Industrial Revolution, 2022
Apprehension about human rights violations, and the corresponding urge to protect human rights, has grown steadily as AI has developed. Most human rights activists do not oppose the development of AI itself, since scientific and technological progress is natural and spontaneous and cannot be held back for long; rather, they advocate building the whole AI system on a humanistic approach, so that near-human or superhuman AI capabilities do not threaten the existence of humankind, the wisest of God's creations. The Toronto Declaration, 2018, is one such attempt by a group of human rights activists and technocrats to safeguard humankind and to give a specific orientation to otherwise uncontrolled technological development so that it proves beneficial to human society. It called on both public and private actors to ensure that algorithms respect the right to equality and are based on the principle of nondiscrimination.52 The main focus of the Toronto Declaration is on how to make technology human-centric and to ensure that issues related to AI, including its ethical dimension, are judged through a human rights lens to assess present and potential future harm to human rights, and that corrective measures are taken to mitigate any risks that arise.53 The Declaration holds that, under international human rights law, states are bound to protect human rights, including the right not to be discriminated against, and that, with this in mind, machine learning systems should be designed so that people can meaningfully enjoy the right to life, along with the right to privacy, fundamental freedoms, and so on. Machine learning systems should be based on the principles of inclusion, diversity, and equity, and must ensure transparency and accountability from the different actors involved in the algorithmic process.
Diversity, Equity, and Inclusion in Artificial Intelligence: An Evaluation of Guidelines
Published in Applied Artificial Intelligence, 2023
Gaelle Cachat-Rosset, Alain Klarsfeld
Most guidelines assert the importance of DEI in AI. For example, in the Toronto Declaration, Human Rights Watch (2018, 6) states that "This Declaration underlines that inclusion, diversity and equity are key components of protecting and upholding the right to equality and nondiscrimination. All must be considered in the development and deployment of machine learning systems in order to prevent discrimination, particularly against marginalized groups." The content analysis allowed for the identification of 14 categories of principles related to DEI (Table 3). In their reviews of AI ethics principles, Hagendorff (2020) and Jobin, Ienca, and Vayena (2019) found that about 80% of their sources specify the notion of fairness. Focusing on the DEI principles put forward, our results tend to confirm these findings, with 76% of sources mentioning "Equity/Fairness," by far one of the most cited notions, followed by "nondiscrimination" at 57%. Both of these DEI principles underpin the regulation of DEI in most Western countries (Klarsfeld et al. 2014), so DEI principles in AI primarily remind actors of the DEI legal obligations already in place. For half of the guidelines, greater diversity in the datasets used to develop or train AISs should provide greater representativeness, a more AI-specific principle that addresses one of the most frequently cited charges against discriminatory AI (Howard and Borenstein 2018). Echoing the important debate in AI about accountability for decisions made and supported by AISs, especially when those decisions can affect individuals' personal lives, health, safety, or professional lives, 46% of sources recommend that humans should always be responsible for final decisions. Moreover, 41% of guidelines state that AISs should be sources of inclusion and should tend to correct existing inequalities in societies, playing a proactive role in improving the social environment in which the systems are developed or used, rather than merely avoiding the reproduction of inequalities. Also, 41% of guidelines insist on the need for transparency of AI, which is "about efforts to identify, prevent and mitigate against discrimination in AI systems" (Toronto Declaration 2018, 12). Respect for human dignity and social justice (referred to by 35% and 33% of guidelines, respectively) is also pointed out, reflecting a consideration of all individuals without distinction.