Measuring Human Performance in the Field
Published in Haydee M. Cuevas, Jonathan Velázquez, Andrew R. Dattel, Human Factors in Practice, 2017
Igor Dolgov, Elizabeth K. Kaltenbach, Ahmed S. Khalaf, Zachary O. Toups
Muir’s (1994) model of human trust differentiates between trust, confidence, predictability, and accuracy. Predictability is the basis on which an operator can anticipate future system behavior. The operator has a level of confidence in their prediction of what the system will do. The accuracy of that prediction can be assessed by comparing the prediction to how the system actually behaved. All of these concepts are distinct yet related, and all are considered when attempting to measure a user’s trust in a system.
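As a rough illustration of the accuracy component described above, the sketch below scores how often an operator’s predictions match the system’s observed behavior. It is a hypothetical example, not part of Muir’s model; the function name and the trial data are invented for illustration.

```python
# Hypothetical illustration (not from Muir, 1994): assessing prediction accuracy
# by comparing each prediction with what the system actually did.

def prediction_accuracy(predictions, actual_behaviors):
    """Return the fraction of predictions that matched the observed behavior."""
    if len(predictions) != len(actual_behaviors):
        raise ValueError("Each prediction needs a corresponding observation.")
    matches = sum(p == a for p, a in zip(predictions, actual_behaviors))
    return matches / len(predictions)

# Example: on each trial the operator predicts whether an automated system will
# hand back control ("handover") or keep operating ("continue").
predicted = ["handover", "continue", "continue", "handover"]
observed = ["handover", "continue", "handover", "handover"]
print(prediction_accuracy(predicted, observed))  # 0.75
```

A higher score here reflects only how well the operator anticipated the system; confidence and trust remain separate constructs to be measured in their own right.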
Making big things happen
Published in James P Trevelyan, Learning Engineering Practice, 2021
Procurement and logistics, manufacturing, installation, commissioning, and acceptance testing consume the bulk of the capital investment in any project. Predictability is key at this stage: engineers are under a great deal of pressure to make sure that everything is completed safely and on time, within the planned budget, and delivered with the required technical performance.
A Situation Awareness Perspective on Human-AI Interaction: Tensions and Opportunities
Published in International Journal of Human–Computer Interaction, 2023
Jinglu Jiang, Alexander J. Karran, Constantinos K. Coursaris, Pierre-Majorique Léger, Joerg Beringer
As mentioned earlier, automation adds complexity to the system, so users may feel challenged to understand the system’s view of reality or the decisions it makes. AI is inherently complex, and the extant literature has investigated various elements that may contribute to different types of complexity. For example, factors such as system components, features, objects, the interdependence of these components, the degree of interaction among them, system dynamics like the pace of status change, and the predictability of those dynamics all contribute to system-level complexity (Endsley, 2016; Tegarden et al., 1995; Windt et al., 2008). An over-complex system may result from feature creep, which increases users’ mental workload and harms task performance because the system as a whole becomes difficult to understand (Page, 2009). However, a complex system is not necessarily complex to use. For example, a self-driving car has complex automobile mechanics (e.g., more sensors, information fusion mechanisms, and diagnostic components), but operating it can be much less complex than operating a traditional car.
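As a loose illustration of the structural side of this, the toy sketch below counts components and the interdependencies among them, two of the contributors named above. It is not one of the complexity measures proposed in the cited work, and all names and figures in it are invented.

```python
# Hypothetical toy proxy for structural complexity: the count of components
# plus the count of distinct interdependencies declared between them.

def structural_complexity(components, dependencies):
    """Return a simple size-plus-coupling score for a set of components."""
    return len(set(components)) + len({frozenset(pair) for pair in dependencies})

# A self-driving stack with many interacting parts scores higher than a basic
# cruise controller, even though it may be simpler to operate.
self_driving = structural_complexity(
    ["camera", "lidar", "fusion", "planner", "diagnostics"],
    [("camera", "fusion"), ("lidar", "fusion"),
     ("fusion", "planner"), ("planner", "diagnostics")],
)
cruise_control = structural_complexity(
    ["speed_sensor", "throttle"],
    [("speed_sensor", "throttle")],
)
print(self_driving, cruise_control)  # 9 3
```

The gap between the two scores mirrors the point in the excerpt: internal complexity can grow substantially without the operator’s task becoming correspondingly harder.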