Safety Standards and Certification
Published in Chris Hobbs, Embedded Software Development for Safety-Critical Systems, 2019
The problem of understanding exactly what the machine-learning system has learned is known as “model interpretability”. Reference [9] by Zachary C. Lipton points out that this term is not properly defined and lists several possible interpretations. It also points out that regulations in the European Union require that individuals affected by algorithmic decisions have a right to an explanation of how those decisions are made. This adds a legal necessity, alongside the safety necessity, for understanding what has been learned.
Introduction
Published in Explanatory Model Analysis, 2021
Przemyslaw Biecek, Tomasz Burzykowski
A reaction to some of these examples and issues is new regulation, such as the General Data Protection Regulation (GDPR, 2018). New civic rights are also being formulated (Goodman and Flaxman, 2017; Casey et al., 2019; Ruiz, 2018). A noteworthy example is the “Right to Explanation”, i.e., the right to be provided with an explanation for the output of an automated algorithm (Goodman and Flaxman, 2017). To exercise this right, we need new methods for the verification, exploration, and explanation of predictive models.
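One family of such exploration methods is model-agnostic variable importance. The sketch below is a minimal illustration rather than the authors' own implementation: it applies permutation importance from scikit-learn to a hypothetical tabular dataset, and the estimator, feature names, and data are assumptions made purely for illustration.

```python
# Minimal sketch: exploring a "black-box" predictive model with permutation
# importance (hypothetical data and estimator, for illustration only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical tabular data: 1000 applicants described by 4 features.
feature_names = ["age", "income", "tenure", "num_defaults"]
X = rng.normal(size=(1000, 4))
y = (X[:, 1] - 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The predictive model whose behaviour we want to explore.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Permutation importance: how much does held-out accuracy drop when each
# feature is shuffled? Larger drops indicate features the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
    print(f"{name:>12}: {mean:.3f} +/- {std:.3f}")
```

Such a summary explores what the model has learned globally; explaining an individual decision to an affected person requires local, per-prediction methods in addition.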
Artificial Intelligence Governance For Businesses
Published in Information Systems Management, 2023
Johannes Schneider, Rene Abraham, Christian Meske, Jan vom Brocke
Data is the representation of facts using text, numbers, images, sound, or video (DAMA International, 2009). While Abraham et al. (2019) provide an overview of the data scope, we emphasize that the type of data impacts model selection, e.g., structured or tabular data vs. unstructured data such as text and images. For example, deep learning models work well with large volumes of unstructured data but have been relatively less successful on tabular data. An essential characteristic of data is its sensitivity level: is it personal or non-personal data? Personal data, i.e., data that relates to humans, is subject to different regulations, e.g., the GDPR enacted in the European Union in 2018. A secondary reason for the separation is that personal data often comes with different characteristics with respect to quality and costs, which are strongly reflected in ML and system properties. For instance, the GDPR grants individuals the right to an explanation of automated decisions based on their data. Such explainability processes are less of a concern for models based on non-personal data. Reddy et al. (2020) set forth a governance model for healthcare based on fairness, transparency, trustworthiness, and accountability. Janssen et al. (2020) emphasized the importance of a trusted data-sharing framework.
Enhancing Fairness Perception – Towards Human-Centred AI and Personalized Explanations Understanding the Factors Influencing Laypeople’s Fairness Perceptions of Algorithmic Decisions
Published in International Journal of Human–Computer Interaction, 2023
Avital Shulner-Tal, Tsvi Kuflik, Doron Kliger
Transparency of algorithmic decisions can be achieved by providing explanations about the outcome of the system and the decision-making process (Abdollahi & Nasraoui, 2018; Arrieta et al., 2020; Felfernig & Gula, 2006; Kim et al., 2015; Ribeiro et al., 2016; Sinha & Swearingen, 2002). These explanations may enable various observers to understand the reasons for a decision made by an ADMS. They may also be requested by regulators in compliance with users’ legal ‘right to explanation’, under which users can demand explanations of decisions that an algorithmic system made about them (Goodman & Flaxman, 2017; Zhang et al., 2022). Explanations of ADMSs, in turn, may increase users’ trust in the system and therefore encourage users to perceive it as fair (Arrieta et al., 2020; Lipton, 2018). Hence, explainability and transparency of ADMSs may promote trustworthiness and increase users’ fairness perception regardless of the system’s actual (computational) fairness (Ribeiro et al., 2016; Singh et al., 2018; Theodorou et al., 2017; Wortham et al., 2016).
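To make the idea of a per-decision explanation concrete, the following minimal sketch (not taken from the cited works; the loan-approval model, feature names, and applicant values are hypothetical) decomposes a single prediction of a linear scoring model into additive feature contributions, the kind of breakdown that could be shown to a user exercising a ‘right to explanation’.

```python
# Minimal sketch: an additive, per-decision explanation for one individual's
# automated decision (hypothetical loan-approval model; illustration only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
feature_names = ["income", "debt_ratio", "years_employed"]

# Hypothetical training data for an automated decision-making system (ADMS).
X = rng.normal(size=(500, 3))
y = (1.5 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

# One applicant whose decision must be explained.
applicant = np.array([0.4, 1.2, -0.3])
decision = model.predict(applicant.reshape(1, -1))[0]

# For a linear model the log-odds score is additive, so each feature's
# contribution relative to the average applicant can be reported directly.
baseline = X.mean(axis=0)
contributions = model.coef_[0] * (applicant - baseline)

print(f"Decision: {'approved' if decision == 1 else 'rejected'}")
for name, value, contrib in zip(feature_names, applicant, contributions):
    direction = "raised" if contrib > 0 else "lowered"
    print(f"  {name} = {value:+.2f} {direction} the score by {abs(contrib):.2f}")
```

For non-linear models, model-agnostic local methods such as LIME (Ribeiro et al., 2016, cited above) play an analogous role, approximating the decision boundary around the individual case.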
Corporate ownership of automated vehicles: discussing potential negative externalities
Published in Transport Reviews, 2020
However, corporations will most likely be reluctant to comply with algorithmic transparency, on account of protecting trade secrets. Nevertheless, AI experts insist that anyone procuring AI technologies for use in the public sector should demand that vendors waive trade-secrecy claims before entering into any agreements with the government (Whittaker et al., 2018, pp. 4–5). The justification for accountability and transparency rests on the “right to explanation”: AI developers and operators are required to provide meaningful information about the logic of processing (Edwards & Veale, 2017; Goodman & Flaxman, 2017). Among the mechanisms for transparency and oversight are rank-and-file employee representation on the board of directors, external ethics advisory boards, and independent monitoring and transparency efforts. Companies need to ensure that their AI infrastructures can be understood from “nose to tail”, including their ultimate application and use. Importantly, third-party experts should be able to audit and publish findings about key systems (Jobin, Ienca, & Vayena, 2019; Whittaker et al., 2018, p. 4). By making the route-finding algorithm transparent and explainable, governments could detect whether the operator is manipulating it in their own interest and against the common good.