20 Years of Automation in Team Performance
Published in Mustapha Mouloua, Peter A. Hancock, James Ferraro, Human Performance in Automated and Autonomous Systems, 2019
Paul A. Barclay, Clint A. Bowers
Additionally, the effect of automation on team processes seems to be directly related to the type of task that is being automated and the role of the team member performing that task. Almost universally, when physical demand and effort are automated, team members redirect that spared workload into more explicit communication or coordination (Bowers et al., 1995; Bowers et al., 1998; Jentsch & Bowers, 1996). By contrast, when tasks involving information acquisition and analysis are automated, communication of extraneous information is reduced and team performance improves overall (Gould et al., 2009; Wright & Kaber, 2005). However, care must be taken to ensure that this reduction in communication is not also accompanied by a reduction in vigilant information seeking (Mosier et al., 2001). As advances in technology enable a greater proportion of information to be acquired and analyzed through automated systems, the need for effective countermeasures to automation bias will increase as well.
Autonomous ships, ICT and safety management
Published in Helle A. Oltedal, Margareta Lützhöft, Managing Maritime Safety, 2018
There are four ‘jokers’ in the human aspects of automation. A joker is an issue that does not seem resolvable by traditional design, and even a management solution appears difficult to find (Sherwood Jones & Earthy, 2016). These issues have neither been discussed nor investigated for marine automation or navigation. The jokers are:
Risk compensation, where the performance gains intended for increased safety are used to corporate commercial advantage. Compensation by individuals is well known in, for example, road traffic research.
Automation bias, where people believe the automation when they should not, and frequent errors of omission in manual operations are replaced by major errors of commission in automated operations.
Moral buffering, where remoteness and/or algorithms lead people to act in an inhumane manner.
Affect dilemma, where people attribute personality to automation – probably inappropriately, but unavoidably. We see it in how people try to reason with automation and try to relate to it at work from a safety perspective. We already make an effort to avoid upsetting Siri and Alexa.
Augmented Reality Guidance for Control Transitions in Automated Driving
Published in Alexander Eriksson, Neville A. Stanton, Driver Reactions to Automated Vehicles, 2018
Alexander Eriksson, Neville A. Stanton
Mosier et al. (1996) found that automation bias occurs not only for untrained operators but also for experienced pilots, suggesting that automation bias is a persistent problem for any support system. In highly automated driving, an error of commission could lead to dangerous situations (Stanton and Salmon, 2009) when the system suggests an unsafe action in a take-over scenario, for example when it falsely instructs the driver to change lanes whilst the target lane is occupied by other vehicles. Parasuraman et al. (2008) argue that eye-tracking is a useful tool for assessing complacency or automation bias. They also cite several studies (Manzey et al., 2006; Metzger and Parasuraman, 2005; Thomas and Wickens, 2004) showing that attention moves away from the primary task when operators are complacent. Similarly, Langlois and Soualmi (2016) argued that eye-tracking should be used to give clues about ‘what the driver possibly detected and analysed’ (p. 1578).
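To make that measurement idea concrete, the following minimal sketch (not taken from any of the cited studies; the gaze coordinates, area-of-interest bounds, and function name are invented for illustration) computes the share of gaze samples that fall inside a primary-task area of interest, the kind of simple attention-allocation metric a complacency analysis might start from.

```python
import numpy as np

# Hypothetical gaze samples: (x, y) screen coordinates in pixels, sampled at a fixed rate.
gaze_samples = np.array([
    [640, 360], [650, 355], [1100, 700], [1120, 710], [655, 362],
])

# Hypothetical area of interest (AOI) covering the primary task (e.g., the road scene):
# (x_min, y_min, x_max, y_max) in the same pixel coordinates.
primary_task_aoi = (300, 100, 980, 620)

def proportion_on_primary_task(samples: np.ndarray, aoi: tuple) -> float:
    """Fraction of gaze samples that fall inside the primary-task AOI."""
    x_min, y_min, x_max, y_max = aoi
    inside = (
        (samples[:, 0] >= x_min) & (samples[:, 0] <= x_max) &
        (samples[:, 1] >= y_min) & (samples[:, 1] <= y_max)
    )
    return float(inside.mean())

# A sustained drop in this proportion during automated driving would be one crude
# behavioural indicator that attention is drifting away from the primary task.
print(f"Share of gaze on primary task: {proportion_on_primary_task(gaze_samples, primary_task_aoi):.2f}")
```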
ChatGPT: More Than a “Weapon of Mass Deception” Ethical Challenges and Responses from the Human-Centered Artificial Intelligence (HCAI) Perspective
Published in International Journal of Human–Computer Interaction, 2023
Alejo José G. Sison, Marco Tulio Daza, Roberto Gozalo-Brizuela, Eduardo C. Garrido-Merchán
The Internet’s game-changing power is shown, among other things, in driving down the costs of reproducing and distributing digital content. Similarly, ChatGPT has lowered the marginal costs of producing new and original human-sounding texts to practically zero (once the costs of construction, training, maintenance, and so forth are covered) (Klein, 2023), leading to greater efficiency (Kooli, 2023). Further, the model is very user friendly (user experience or UX) and accessible (Kooli, 2023), responding to natural language prompts and hardly requiring any user training (user interface or UI). ChatGPT presents a ready-to-use, uniquely synthesized text, not links to references like search engines. It can even be prompted to follow a particular language style of a period or an author, “mimicking creativity” (Thompson, 2022), and can be customized to personal needs or preferences (Kooli, 2023). Because ChatGPT produces highly coherent, natural-sounding, and human-like responses, users find them convincing and readily trust them, even if inaccurate. The resulting “automation bias”, which occurs when humans blindly accept machine responses as correct without verifying them, or even while disregarding contradictory information, is extensively documented. After all, machines are objective, do not grow tired or get emotional, and have instant access to ever greater amounts of information (Metz, 2023a).
Design Thinking Framework for Integration of Transparency Measures in Time-Critical Decision Support
Published in International Journal of Human–Computer Interaction, 2022
Paul Stone, Sarah A. Jessup, Subhashini Ganapathy, Assaf Harel
Another bias that influences reliance is automation bias. Automation bias is the tendency of users to over-rely on recommendations from automation, forgoing cognitive processing even when the information received is contradictory (Mosier & Skitka, 1996). Automation bias most likely occurs when users are trying to conserve mental resources during instances of high workload, during complex, time-critical situations (e.g., command and control), when the operator receives confusing information from the agent, or when the user is not properly trained (Cummings, 2004; Goddard et al., 2012). Consequently, automation bias can decrease SA and lead to complacency, possibly leading to catastrophic issues in areas such as patient safety (Schulz et al., 2016) and aviation (Jones & Endsley, 1996). In order to reduce instances of automation bias, researchers have recommended training, providing users with the agent’s reasoning process, and determining an appropriate level of automation for the agent to ensure the human remains in the loop enough that SA is maintained (Cummings, 2004; Goddard et al., 2012). Additional factors related to the environment or situation, such as SA, workload, and time constraints, can also influence trust and reliance (Lee & See, 2004; Lewis et al., 2018).
Explainable Artificial Intelligence: Objectives, Stakeholders, and Future Research Opportunities
Published in Information Systems Management, 2022
Christian Meske, Enrico Bunde, Johannes Schneider, Martin Gersch
Different risks exist regarding the use of AI systems. A major potential problem is “bias”, which comes in different facets. In certain situations, humans have a tendency to over-rely on automated decision-making, called “automation bias”, which can result in a potential failure to recognize errors in the black box (Goddard et al., 2012). As an example, medical doctors ignored their own diagnoses, even when they were correct, because those diagnoses were not recommended by the AI system (Friedman et al., 1999; Goddard et al., 2011). Furthermore, automation bias can foster the process of “deskilling”, either because of the attrition of existing skills or due to the lack of skill development in general (Arnold & Sutton, 1998; Sutton et al., 2018). Such problems highlight the overall risk of humans placing inappropriate trust in AI (Herse et al., 2018).