Effects of Visual, Cognitive and Haptic Tasks on Driving Performance Indicators
Published in Gavriel Salvendy, Advances in Human Aspects of Road and Rail Transportation, 2012
Christer Ahlström, Katja Kircher, Annie Rydström, Arne Nåbo, Susanne Almgren, Daniel Ricknäs
It has become common to bundle modalities into “visual/haptic” and “talking/listening”. While it is true that haptic tasks often involve eye glances to aid hand movements, the results shown here do not fully support a split along those lines. Here the haptic task was conducted without visual support, so the eyes could remain directed at the road. Most performance indicators (PIs) showed more similarities between the cognitive and the haptic task than between the visual and the haptic task. Interestingly, the PIs affected most similarly by the visual and the cognitive load were the lateral PIs in the fixed-base simulator, whose validity is arguably the most questionable. It is therefore suggested that haptic tasks without a visual component be treated separately, and if they prove not to be very detrimental, their increased deployment in HMI solutions should be investigated.
Gamified Approaches for Water Management Systems: An Overview
Published in Panagiotis Tsakalides, Athanasia Panousopoulou, Grigorios Tsagkatakis, Luis Montestruque, Smart Water Grids, 2018
Andrea Castelletti, Andrea Cominola, Alessandro Facchini, Matteo Giuliani, Piero Fraternali, Sergio Herrera, Mark Melenhorst, Isabel Micheel, Jasminko Novak, Chiara Pasini, Andrea-Emilio Rizzoli, Cristina Rottondi
Using persuasive technologies provides several advantages over traditional media or human persuaders. The most important is ubiquity: thanks to the proliferation of mobile and wearable devices, smart meters, and home devices interconnected by means of IoT technologies, people can be reached at any place and at any time. The diversity of modalities is another important advantage: computers and electronic devices can deliver messages in a wide variety of modalities, including text, audio, video, graphs, and animations, with a higher persuasive impact. Finally, computers can store and access huge amounts of information, enabling them to present new facts, statistics, and references as no other medium can.
Perceptual Impairments
Published in Julie A. Jacko, The Human–Computer Interaction Handbook, 2012
Julie A. Jacko, V. Kathlene Leonard, Molly A. McClellan, Ingrid U. Scott
In addition to the benefits already discussed that enable users with visual impairments to access technology, multimodal interfaces also provide superior support for error handling. More specifically, multimodal interfaces have been shown to avoid errors and to recover more efficiently from errors when they do occur. This is a very important element of universal accessibility: enabling users to use a technology includes not only being able to access the technology but also being able to use it efficiently, and reducing or avoiding errors is a major key to improving efficiency. A multimodal interface provides better error handling than a unimodal interface for several reasons:

- Users will select the input mode considered less error prone for a particular context, which is assumed to lead to error avoidance.
- The ability to use several modalities permits users the flexibility to leverage their strengths by using the most appropriate modality. It is possible that the more comfortable the users, the less likely they are to make errors.
- Users tend to switch modes after system errors. For instance, users are likely to switch input modes when they encounter a system recognition error.
- Users are less frustrated with errors when interacting with a multimodal interface, even when errors are as frequent as in a unimodal interface. The reduction in frustration may be the result of users feeling that they have more control over the system because they can switch modes (Oviatt 1999).
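To make the switch-after-error pattern from the list above concrete, here is a minimal Python sketch of an input dispatcher that falls back to an alternative modality when recognition fails. The class, recognizers, and error type are hypothetical illustrations, not an API described in the handbook; the sketch models the fallback whether it is initiated by the user or offered by the system.

```python
# Minimal sketch of switch-after-error handling in a multimodal interface.
# The recognizers, class, and error type below are hypothetical.

class RecognitionError(Exception):
    """Raised when a single-modality recognizer cannot interpret its input."""

class MultimodalInput:
    def __init__(self, recognizers):
        # recognizers: dict mapping a mode name to a callable(raw_input) -> str
        self.recognizers = recognizers

    def interpret(self, preferred_mode, raw_inputs):
        # Try the user's preferred mode first, then the remaining modes,
        # mirroring the observed tendency to switch modality after an error.
        modes = [preferred_mode] + [m for m in self.recognizers if m != preferred_mode]
        for mode in modes:
            if mode not in raw_inputs:
                continue
            try:
                return mode, self.recognizers[mode](raw_inputs[mode])
            except RecognitionError:
                continue  # switch to the next available modality
        raise RecognitionError("all available modalities failed")

# Example: a noisy utterance fails speech recognition, so the same command
# succeeds through the pen modality instead of forcing a spoken retry.
def speech_recognizer(audio):
    raise RecognitionError("low confidence")  # simulated misrecognition

def pen_recognizer(strokes):
    return "open settings"  # simulated successful gesture recognition

ui = MultimodalInput({"speech": speech_recognizer, "pen": pen_recognizer})
print(ui.interpret("speech", {"speech": b"...", "pen": "gesture-data"}))
# -> ('pen', 'open settings')
```

The key design point matches Oviatt's observation: because an alternative channel is always available, a single recognition failure degrades the interaction gracefully rather than blocking it.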
A prediction model of multiple resource theory for dual task walking
Published in Theoretical Issues in Ergonomics Science, 2022
Eda Cinar, Shikha Saxena, Bradford J. McFadyen, Anouk Lamontagne, Isabelle Gagnon
According to the original MRT model, any given task can utilize resources along three dimensions: 1) processing stages, 2) input/output modalities, and 3) processing codes, as illustrated in Figure 1(A). Processing stages refer to the steps that are inherent to all tasks. The model includes three stages: perception, cognition, and response (when engaged in a task, the elements of that task need to be perceived ‘perception stage’, interpreted ‘cognition stage’, and responded to ‘response stage’). Modalities are conceptualized as input and output modalities. Input modalities are the stimuli perceived in the perception stage and are divided into visual and auditory modalities. Output modalities are the channels through which motor responses are formed, based on the processing occurring in the cognition stage; these responses are categorized as manual and vocal modalities (Wickens, Sandry, and Vidulich 1983). Finally, codes are the means by which one manages incoming information from the perception stage (encoding), processes it in the cognition stage, and prepares for action in the response stage (decoding). Codes are labeled as verbal and spatial codes (Figure 1(A)). Figure 1(B) illustrates an example of task analysis using the MRT model as it applies to city driving. City driving requires encoding spatial visual information in the perception stage, processing spatial information in the cognition stage, and decoding the spatial information to produce a manual response in the response stage (Wickens 2002). Hence, the MRT model posits the existence of eight resources: visual spatial, visual verbal, auditory spatial, auditory verbal, cognitive spatial, cognitive verbal, manual spatial, and vocal verbal (Figure 1(B)).
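As an illustration of how this three-dimensional structure can be operationalized, the following Python sketch encodes a task as a list of demands along the stage, modality, and code dimensions and scores interference between two tasks by counting shared dimension levels. This is a deliberately crude simplification for illustration only; the class, function, and task names are assumptions of this sketch, not Wickens' computational model.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative encoding of one resource demand along the three MRT
# dimensions (stage, modality, code); names are assumptions of this sketch.
@dataclass(frozen=True)
class Demand:
    stage: str               # "perception" | "cognition" | "response"
    modality: Optional[str]  # "visual" | "auditory" | "manual" | "vocal"; None for cognition
    code: str                # "spatial" | "verbal"

def demand_conflict(a: Demand, b: Demand) -> int:
    # Crude interference score: one point per shared dimension level.
    # (Two cognition-stage demands share a None modality, which here
    # counts as competing for central resources.)
    return (a.stage == b.stage) + (a.modality == b.modality) + (a.code == b.code)

def task_conflict(task_a, task_b) -> int:
    # Total pairwise interference between two tasks' demand lists.
    return sum(demand_conflict(a, b) for a in task_a for b in task_b)

# City driving, per the Figure 1(B) example: visual spatial perception,
# spatial cognition, manual spatial response.
city_driving = [
    Demand("perception", "visual", "spatial"),
    Demand("cognition", None, "spatial"),
    Demand("response", "manual", "spatial"),
]

# Hands-free conversation: an assumed secondary task using auditory verbal
# perception, verbal cognition, and a vocal verbal response.
conversation = [
    Demand("perception", "auditory", "verbal"),
    Demand("cognition", None, "verbal"),
    Demand("response", "vocal", "verbal"),
]

# Touchscreen destination entry: an assumed visual-manual-spatial task.
touchscreen = [
    Demand("perception", "visual", "spatial"),
    Demand("cognition", None, "spatial"),
    Demand("response", "manual", "spatial"),
]

print(task_conflict(city_driving, conversation))  # 4
print(task_conflict(city_driving, touchscreen))   # 15
```

The two printed scores match the qualitative MRT prediction: the conversation, which draws on largely separate resources, interferes with driving far less than a second visual-manual-spatial task that competes for the same ones.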
Improving the Design of Ambient Intelligence Systems: Guidelines Based on a Systematic Review
Published in International Journal of Human–Computer Interaction, 2022
Juliana Damasio Oliveira, Júlia Colleoni Couto, Vanessa Stangherlin Machado Paixão-Cortes, Rafael Heitor Bordini
The authors of eight papers reported on the importance of incorporating multimodal interaction (De Belen et al., 2019; Huxohl et al., 2019; Sakamoto et al., 2014; Spinsante et al., 2017), feedback (De Belen et al., 2019; Catenazzi et al., 2012; Cha et al., 2019), and interfaces (Goumopoulos & Mavrommati, 2020; Paredes et al., 2015) in ambient intelligence systems to improve user interaction with the system. Some suggested modalities were audio, text, voice, vibration, and visual.
Multimodal data fusion for systems improvement: A review
Published in IISE Transactions, 2022
Nathan Gaw, Safoora Yousefi, Mostafa Reisi Gahrooei
It can often be difficult to fuse multiple modalities due to disparities between them. For instance, two sensors being fused for a prediction model may be in different forms (e.g., they may have different sampling rates, or one may be analog and the other digital). To ameliorate some of these disparities, early fusion can be used to extract information from each modality and fuse it before model training.
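As a concrete example of the sampling-rate disparity, the following Python sketch performs a simple early fusion of two sensors: both signals are resampled onto a shared time base and concatenated into one feature matrix before any model is trained. The signal names, rates, and resampling choice are assumptions for illustration, not from the review.

```python
import numpy as np

def resample(t_src, x_src, t_common):
    """Linearly interpolate a signal onto a shared time grid."""
    return np.interp(t_common, t_src, x_src)

rng = np.random.default_rng(0)
t_fast = np.arange(0, 10, 0.01)  # sensor A: 100 Hz (assumed)
t_slow = np.arange(0, 10, 0.1)   # sensor B: 10 Hz (assumed)
x_fast = np.sin(t_fast) + 0.1 * rng.standard_normal(t_fast.size)
x_slow = np.cos(t_slow) + 0.1 * rng.standard_normal(t_slow.size)

# Fuse on the coarser grid to avoid inventing detail in the slow signal.
t_common = t_slow
features = np.column_stack([
    resample(t_fast, x_fast, t_common),
    resample(t_slow, x_slow, t_common),
])
print(features.shape)  # (100, 2): one fused feature vector per time step
```

Downsampling onto the coarser grid is only one choice; alternatives include upsampling the slower signal or extracting window-level summary features from each modality before concatenation, depending on how much temporal detail the downstream model needs.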