How Do I Tell the Difference between Good AI and Bad?
Published in Carmel Kent, Benedict du Boulay, AI for Learning, 2022
Carmel Kent, Benedict du Boulay
Each of the five paradigms in Figure 1.3 represents a range of approaches rather than a single unified realm; the paradigms differ chiefly in where they place their focus. For example, the ‘human in the loop' paradigm, which emerged as a pushback against the two upper paradigms, focuses on identifying to what extent, and at which points in the AI development and operational process, human actors take a role beyond that of mere users. When looking at some of the newest technological advancements in AI (such as the automation of ML modelling frameworks, e.g. the feature engineering that deep learning approaches have automated), the ‘human in the loop' approach would focus on identifying the parts of the AI development process that are still left in human hands. Those parts could be, for example, a human expert labelling the outcomes as ‘good' or ‘bad' for a supervised ML algorithm, or a human making the final decision after being supported by an AI system. Regardless of how and when the human participates, the human-in-the-loop paradigm assumes AI dominance in the process, and its focus is on where and how a human actor comes in.
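The second example above, a human making the final decision after being supported by an AI system, can be sketched in a few lines. This is an illustrative toy, not code from the chapter; the names `ai_score`, `human_review`, and `decide` are all hypothetical.

```python
def ai_score(application: dict) -> float:
    """Toy model: returns a 'good outcome' probability for an application."""
    return 0.9 if application.get("income", 0) > 50_000 else 0.4

def human_review(application: dict, score: float) -> str:
    """Stand-in for a human expert labelling the outcome as 'good' or 'bad'."""
    # A real reviewer would weigh context the model cannot see.
    return "good" if score >= 0.5 else "bad"

def decide(application: dict) -> str:
    score = ai_score(application)             # the AI supports...
    return human_review(application, score)   # ...but the human decides

print(decide({"income": 60_000}))  # prints "good"
```

The point of the structure is that `human_review` sits after the model in the pipeline, so no outcome reaches the outside world without passing through human hands.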
Autonomous Robots and CoBots
Published in Antonio Sartal, Diego Carou, J. Paulo Davim, Enabling Technologies for the Successful Deployment of Industry 4.0, 2020
Miguel Ángel Moreno, Diego Carou, J. Paulo Davim
A CPS (Lee, 2006; Baheti and Gill, 2011) is an essential concept in I4.0, referring to entities where “the physical and virtual world grow together” (Thoben, Wiesner and Wuest, 2017). It is a system that collects data about itself and its environment, processes and evaluates these raw data, exchanges information with other systems, makes local decisions and initiates actions by itself. To this end, it relies on sensors, wireless communication, computer processing and actuators (Klocke et al., 2011; Wang, Törngren and Onori, 2015). In addition to these features, there are other interesting aspects of CPS (Cengarle et al., 2013): control algorithms for self-organization, adaptability to changing conditions, efficient communication across domains, embedded and IT-dominated systems, and the human inside or outside the loop. For the latter, autonomous robots and collaborative robots (CoBots) exemplify the absence and presence of the human in the loop, respectively.
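The sense–evaluate–decide–act cycle described above can be made concrete with a minimal sketch. The thermostat scenario, class name, and method names below are invented for illustration and are not from the chapter.

```python
class CyberPhysicalSystem:
    """Toy CPS: senses a temperature, decides locally, actuates a heater."""

    def __init__(self, target_temp: float):
        self.target = target_temp
        self.heater_on = False

    def sense(self) -> float:
        """Collect raw data about the environment (stubbed reading)."""
        return 18.0

    def decide(self, reading: float) -> bool:
        """Evaluate the data and make a local decision by itself."""
        return reading < self.target

    def act(self, turn_on: bool) -> None:
        """Initiate an action through an actuator."""
        self.heater_on = turn_on

    def step(self) -> None:
        self.act(self.decide(self.sense()))

cps = CyberPhysicalSystem(target_temp=21.0)
cps.step()
print(cps.heater_on)  # True: reading below target, so the heater engages
```

In a real CPS the stubbed `sense` would read hardware sensors over a fieldbus or wireless link, and `act` would drive physical actuators; the loop structure is what the definition above captures.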
Intelligent Manufacturing Systems, Smart Factories and Industry 4.0: A General Overview
Published in Kaushik Kumar, Divya Zindani, J. Paulo Davim, Digital Manufacturing and Assembly Systems in Industry 4.0, 2019
Future manufacturing systems will also have to deal with human–machine collaboration. Although machines will operate unmanned, human involvement will remain a central issue. Machines will perform speech recognition, computer vision, machine learning, problem solving, etc. Machine learning with human intervention (human-in-the-loop machine learning) will improve the decision-making process. Machines will assist humans across a variety of roles in manufacturing suites to deal with dynamic business requirements.
Transitioning to Human Interaction with AI Systems: New Challenges and Opportunities for HCI Professionals to Enable Human-Centered AI
Published in International Journal of Human–Computer Interaction, 2023
Wei Xu, Marvin J. Dainoff, Liezhong Ge, Zaifeng Gao
In the AI community, research on hybrid intelligence can broadly be divided into two categories. The first category develops human-in-the-loop AI systems at the system level, so that humans are always kept as part of the AI system (e.g., Zanzotto, 2019). For instance, when the confidence of the system output is low, humans can intervene by adjusting the input, creating a feedback loop that improves the system’s performance (Zanzotto, 2019; Zheng et al., 2017). Human-in-the-loop AI systems combine the advantages of human intelligence and AI, and effectively realize human–AI interaction through user interventions, such as online assessment of crowdsourced human input (Dellermann, Calma, et al., 2019; Mnih et al., 2015) and user participation in the training, adjustment and testing of algorithms (Acuna et al., 2018; Correia et al., 2019).
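The confidence-triggered intervention described above has a simple shape: route low-confidence outputs to a human and keep the correction for later retraining. The sketch below assumes a toy text classifier; `model_predict`, `ask_human`, the threshold value, and `feedback_store` are all illustrative names, not an API from the cited works.

```python
CONFIDENCE_THRESHOLD = 0.8
feedback_store = []  # human corrections, later used to retrain the model

def model_predict(text: str) -> tuple[str, float]:
    """Toy classifier returning (label, confidence)."""
    return ("spam", 0.95) if "free money" in text else ("ham", 0.55)

def ask_human(text: str) -> str:
    """Stand-in for a crowdsourced or expert annotator."""
    return "spam" if "prize" in text else "ham"

def classify(text: str) -> str:
    label, confidence = model_predict(text)
    if confidence < CONFIDENCE_THRESHOLD:     # low confidence:
        label = ask_human(text)               # the human intervenes,
        feedback_store.append((text, label))  # and feedback closes the loop
    return label

print(classify("free money now"))    # high confidence: no intervention
print(classify("claim your prize"))  # low confidence: routed to the human
```

Only the uncertain cases cost human attention, which is what makes this pattern practical at scale.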
Speciesism and Preference of Human–Artificial Intelligence Interaction: A Study on Medical Artificial Intelligence
Published in International Journal of Human–Computer Interaction, 2023
Weiwei Huo, Zihan Zhang, Jingjing Qu, Jiaqi Yan, Siyuan Yan, Jinyi Yan, Bowen Shi
Second, we emphasize the importance of promoting patients’ trust in medical AI. When AI intervenes in the healthcare field, doctors should provide medical explanations and risk information so that patients can understand how AI works, its benefits and drawbacks, and even the limitations of the results it produces, helping to achieve the goals of trustworthy AI. Furthermore, human intuition and cognitive resilience can strengthen AI’s capabilities and contribute to algorithmic robustness (Holzinger, 2021). We should highlight the “human-in-the-loop,” build user-centric AI-enabled systems, and promote human–computer interaction as well as transparent communication to foster the development and maintenance of trust (Aoki, 2021; Holzinger et al., 2022).
“Is It Legit, To You?”. An Exploration of Players’ Perceptions of Cheating in a Multiplayer Video Game: Making Sense of Uncertainty
Published in International Journal of Human–Computer Interaction, 2023
Another possibility for limiting relentless mutual surveillance would be to evaluate the accuracy of individual reports by comparing them with the feedback given by automatic systems. Reports could be rated for accuracy and stored among the player’s statistics, acting as a social deterrent against false hackusations. In line with the human-in-the-loop approach, in which people are involved in improving artificial intelligence systems (Kamar, 2016), this constant comparison might also help developers adjust and correct their systems, which can learn from human judgments through a continuous feedback loop.
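One way the report-rating scheme above could work is sketched below: each player report is checked against the automatic system's verdict, and the reporter's accuracy statistic is updated accordingly. Everything here is hypothetical, including the function names and the stubbed anti-cheat verdict.

```python
from collections import defaultdict

accuracy = defaultdict(lambda: {"correct": 0, "total": 0})

def automatic_verdict(reported_player: str) -> bool:
    """Stand-in for the anti-cheat system's own judgement."""
    return reported_player == "known_cheater"

def file_report(reporter: str, reported_player: str) -> None:
    stats = accuracy[reporter]
    stats["total"] += 1
    if automatic_verdict(reported_player):
        stats["correct"] += 1  # report confirmed by the automatic system

def accuracy_rate(reporter: str) -> float:
    stats = accuracy[reporter]
    return stats["correct"] / stats["total"] if stats["total"] else 0.0

file_report("alice", "known_cheater")
file_report("alice", "innocent_player")  # a false 'hackusation'
print(accuracy_rate("alice"))  # prints 0.5
```

The same log of (report, verdict) pairs that drives the deterrent statistic could also serve as labelled training data for the detection system, closing the human-in-the-loop feedback cycle the excerpt describes.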