Critical legal theory and encountering bias in tech
Published in Penny Crofts, Honni van Rijswijk, Technology, 2021
On the other hand, a study of facial recognition algorithms by Algorithmic Justice League founder Joy Buolamwini found that 80% of the input images on which the algorithms were based were of white faces, and 75% were of men. The algorithms were 99% accurate in detecting male faces, but only 65% accurate at detecting the faces of black women.8 So focusing on gender alone is not likely to solve other intersectionality issues in AI.9 And while medical research is being accelerated by AI, medical trials have not been representative, with most participants being white, older, wealthy males.10 Infamously, in 2016, Microsoft launched Tay, an AI chatbot that Microsoft described as an experiment in “conversational understanding”.11 Tay was designed to be a social robot and to learn through engagement: the more people who chatted with Tay, the smarter Tay was meant to become. As soon as Tay was launched, people started tweeting @Tay with racist, misogynist conversation. Within a day, Tay was spouting so much hate speech that the robot had to be put to sleep. More and more examples of bias and misuse of AI technologies are being revealed, from insurance and health algorithms that exclude vulnerable people from coverage to police use of racist facial recognition technology. Despite these problems, there is little transparency and no oversight by government or international bodies.
A Brief History of Artificial Intelligence
Published in Ron Fulbright, Democratization of Expertise, 2020
In 2016, Microsoft released a similar bot named Tay for a Western audience on the Twitter, Kik, and GroupMe platforms. Tay was built to mimic a 19-year-old American girl and learn from its interactions on these platforms (Bright, 2016). However, anonymous users of the notorious troll sites 4chan and 8chan were able to quickly identify some weaknesses in Tay’s “repeat after me” features and exploit them. Tay learned and began tweeting racist, misogynist, and other offensive comments within sixteen hours of its release. Microsoft quickly pulled the plug and deleted the most offensive tweets (Ohlheiser, 2016).
Ignorance by proxy
Published in Cathrine Hasse, Posthumanist Learning, 2020
Tay was trained in a domain with algorithms tied to machine learning. There is an overlap between some of the humanist learning paradigms in the learning sciences and the theories used in machine learning; in fact, as noted, cognitive and behaviourist learning theories in particular have directly inspired theories of algorithmic learning. What Tay can teach us is that perhaps the paradigms in the learning sciences have not really grasped human learning, but rather a kind of computational learning that works for machines.
Can you count on a calculator? The role of agency and affect in judgments of robots as moral agents
Published in Human–Computer Interaction, 2023
Sari R.R. Nijssen, Barbara C. N. Müller, Tibor Bosse, Markus Paulus
In past years, a small but significant body of literature on the topic of robot morality has emerged. Besides research in the domain of Artificial Intelligence (AI) on how to program robots with ethical values (e.g., Dignum, 2017), people’s perceptions of robot morality have been investigated as well. For example, people are averse to machines making moral choices (Bigman & Gray, 2018) and their trust in machines is violated more quickly than their trust in humans (Dietvorst et al., 2015). Despite such adversities, people blame a moral transgression on an algorithm (e.g., in the case of racist chatbot Tay) rather than on its programmer or user (Shank & DeSanti, 2018). Furthermore, people tend to trust human-like robots more in a collaborative setting (Natarajan & Gombolay, 2020; Złotowski et al., 2016) because they are perceived as being more intelligent (Schaefer et al., 2012). Lastly, in a line of research highly relevant to the current study, Malle and colleagues show that people generally expect robots to follow similar moral norms as humans, but the extent to which they blame robots differs depending on the context of moral decision-making. For example, robots are blamed more for moral outcomes in which they make non-utilitarian choices (Malle et al., 2019, 2015; Voiklis et al., 2016).
Artificial Intelligence Governance For Businesses
Published in Information Systems Management, 2023
Johannes Schneider, Rene Abraham, Christian Meske, Jan Vom Brocke
Supervised learning uses labeled training data; that is, each sample X is associated with a label y. Unsupervised learning uses unlabeled training data and typically addresses tasks like organizing data into groups (clustering) or generating new samples. While adaptivity might be a requirement in a dynamic environment, online systems, which continuously learn from observed data, are more complex from a system engineering perspective and face multiple unresolved challenges (Stoica et al., 2017). System evolution must be foreseen and managed, which is not needed for offline systems that do not change. In the past, such dynamic systems have gotten “out of control” after deployment; for example, Microsoft’s Tay chatbot had to be shut down after only a few hours of operation due to inappropriate behavior (Mason, 2016). Finally, an important concept is the transparency of ML models. Transparent models are intrinsically human-understandable, whereas complex black-box models such as deep learning require external methods that provide explanations that may or may not suffice to understand the model (Adadi & Berrada, 2018; Meske et al., 2022; Schneider & Handali, 2019).
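These distinctions can be made concrete with a minimal sketch, assuming scikit-learn and NumPy; the toy data, the choice of LogisticRegression, KMeans, and SGDClassifier, and the batching are illustrative assumptions rather than material from the article. The same dataset is used once with labels (supervised), once without (clustering), and once as a stream of batches that a model keeps learning from after deployment (online).

```python
# A minimal sketch (illustrative only): the same toy data used for
# supervised, unsupervised, and online learning.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression, SGDClassifier

rng = np.random.default_rng(0)

# Toy data: two well-separated 2-D blobs, shuffled.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
perm = rng.permutation(len(X))
X, y = X[perm], y[perm]

# Supervised learning: every sample X[i] is associated with a label y[i].
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised learning: the same X without labels; the task becomes
# organizing the data into groups (clustering).
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)

# Online learning: the model is updated batch by batch as data arrives,
# so its behavior keeps changing after deployment; this is the property
# that allowed Tay to drift once users supplied adversarial input.
online_clf = SGDClassifier()
for X_batch, y_batch in zip(np.array_split(X, 10), np.array_split(y, 10)):
    online_clf.partial_fit(X_batch, y_batch, classes=[0, 1])
print("online accuracy:", online_clf.score(X, y))
```

The last loop is the setting the authors flag as risky: because the model continues to change with whatever data arrives, its post-deployment evolution has to be foreseen and managed in a way that offline systems do not require.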
The “humane in the loop”: Inclusive research design and policy approaches to foster capacity building assistive technologies in the COVID-19 era
Published in Assistive Technology, 2022
John Bricout, Julienne Greer, Noelle Fields, Ling Xu, Priscila Tamplain, Kris Doelling, Bonita Sharma
The notion of value-neutral, “objective” technology derived from machine learning has been invalidated by notable cases, such as Microsoft’s “Tay” chatbot being “tutored” in racism from Twitter data. Machine learning-based interventions and tools concomitant with the development of AI and robotics bring greater predictive power and objectivity, broadly speaking, but they can also propagate unfair outcomes to marginalized communities because of faulty socio-technical problem formulation based on biased (human) causal theories and inferences (Martin et al., 2020). The iterative exchanges between human and machine that bias the machine’s algorithms and skew its performance may result from how people selectively label information (through a personal lens, randomly, or through active learning) and from the algorithm’s own process of information selection (Sun et al., 2020). Holding the machines harmless puts too little emphasis on the consequences of their nonobjective learning process.
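The selective-labelling mechanism described here can be shown with a deliberately small, hypothetical sketch (the texts, group names, and use of a Naive Bayes classifier from scikit-learn are assumptions made for illustration, not material from the article): a model trained on labels that excuse abuse aimed at one group simply reproduces that double standard.

```python
# A toy, hypothetical illustration: when labellers apply a double standard,
# the trained model reproduces that double standard rather than any
# "objective" notion of toxicity. Texts and group names are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = [
    "you are wonderful", "have a lovely day",
    "alphas are terrible", "alphas should leave",   # abuse aimed at one group
    "betas are terrible", "betas should leave",     # identical abuse aimed at another
]
# Selective labelling: only the abuse aimed at "betas" was marked toxic.
labels = ["ok", "ok", "ok", "ok", "toxic", "toxic"]

vec = CountVectorizer()
model = MultinomialNB().fit(vec.fit_transform(texts), labels)

# The model has learned the labellers' bias, not a neutral rule:
print(model.predict(vec.transform(["alphas are terrible"])))  # -> ['ok']
print(model.predict(vec.transform(["betas are terrible"])))   # -> ['toxic']
```

The point is not the particular classifier but that nothing in the learning process corrects for the labellers’ lens; the machine’s “objectivity” extends only as far as the labels it was given.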