Toward an Information Society for All
Published in Constantine Stephanidis, User Interfaces for All, 2000
Constantine Stephanidis, Gavriel Salvendy, Demosthenes Akoumianakis, Albert Arnold, Nigel Bevan, Daniel Dardailler, Pier Luigi Emiliani, Ilias Iakovidis, Phill Jenkins, Arthur I. Karshmer, Peter Korn, Aaron Marcus, Harry J. Murphy, Charles Oppermann, Christian Stary, Hiroshi Tamura, Manfred Tscheligi, Hirotada Ueda, Gerhard Weber, Juergen Ziegler
Actions in this area should aim to establish a basis for designing for alternate interaction modalities and combinations of modalities, as well as to demonstrate the benefits of developing multimodal and multimedia systems for communities, groups, and individual users. The types of actions envisaged include basic research, applied research, and technological development and demonstration. Examples of RTD activities include, but are not limited to:
- Experimental studies on design for alternative modalities.
- Development and validation of a taxonomy of modality combinations in computer-based interactive software (detailing issues such as modality compatibility, modality expressiveness, and redundancy) to guide effective multimodal interaction design.
- Assessment of the usefulness of multimedia in specific application domains and contexts of use.
- Experimentation with alternative interaction design (e.g., nonvisual).
- Demonstrators of advanced multimodal interaction techniques (e.g., gestures, speech recognition, tactile interaction).
From Automation to Interaction Design
Published in Guy A. Boy, The Handbook of Human-Machine Interaction, 2017
Many methods have been developed and used to perform cognitive task analysis (CTA) and cognitive function analysis (CFA) in order to identify cognitive demands that affect resources such as memory, attention, and decision-making (Klein, 1993; Schraagen, Chipman and Shalin, 2000; Boy, 1998). These methods should be adapted to study the usability and usefulness of multimodal HMS. They should be able to tell us what kind of modality is appropriate for a given task or function in a given context. Should it be adaptable or adaptive? Should it be associated with another modality to make the signal more salient to human operators? We anticipate that these methods will focus on the definition of context. This is why storyboard prototypes first, and then human-in-the-loop simulations, are mandatory to support evaluations of multimodal HMS. Context awareness also plays an important role in priority management, for example on the flight deck (Funk, 1996; Funk and Braune, 1999), especially after an alarm (Boucek, Veitengruber and Smith, 1977) or an interruption (McFarlane and Latorella, 2002). Multimodal interfaces should be able to provide appropriate context awareness. Smith and Mosier (1986) provided user interface design guidelines to support the flexibility of interruption handling, but there is more to do on context sensitivity using multimodal interfaces. More recently, Loukopoulos, Dismukes and Barshi (2009) analyzed the underlying multitasking myth and complexity handling in real-world operations. As a matter of fact, flight deck multimodal interfaces should be intimately associated with checklists and operational procedures (Boy and De Brito, 2000). Paper support could also be considered an interesting modality.
Multimedia User Interface Design
Published in Julie A. Jacko, The Human–Computer Interaction Handbook, 2012
A message is conveyed by a medium and received through a modality. A modality is the sensory channel that we use to send and receive messages to and from the world, essentially our senses. Two principal modalities are used in human–computer communication:
- Vision: all information received through our eyes, including text and image-based media
- Hearing: all information received through our ears, as sound, music, and speech
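The medium/modality distinction can be made concrete in code. The following is a minimal sketch with hypothetical names (Message, MEDIUM_TO_MODALITY are illustrative, not from the handbook) of routing media types to the sensory modality through which they are received:

```python
# Minimal sketch of the medium/modality distinction: a message is conveyed
# by a medium and received through a sensory modality. Names are hypothetical.
from dataclasses import dataclass
from enum import Enum

class Modality(Enum):
    VISION = "vision"    # text and image-based media
    HEARING = "hearing"  # sound, music, and speech

# Hypothetical mapping from media types to the modality that receives them.
MEDIUM_TO_MODALITY = {
    "text": Modality.VISION,
    "image": Modality.VISION,
    "sound": Modality.HEARING,
    "music": Modality.HEARING,
    "speech": Modality.HEARING,
}

@dataclass
class Message:
    content: str
    medium: str  # e.g. "text", "speech"

    @property
    def modality(self) -> Modality:
        return MEDIUM_TO_MODALITY[self.medium]

if __name__ == "__main__":
    msg = Message(content="Low battery", medium="speech")
    print(msg.modality)  # Modality.HEARING
```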
Multi-modal classifier fusion with feature cooperation for glaucoma diagnosis
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2019
Nacer Eddine Benzebouchi, Nabiha Azizi, Amira S. Ashour, Nilanjan Dey, R. Simon Sherratt
Today, many real-world applications require multimodal data processing. More and more of the information collected from the real world naturally consists of data with different modalities. The term multimodality refers to the use of several modalities to carry out the same task. A modality is a particular concrete form of a communication mode (visual mode, sound mode, gestural mode, etc.); for example, noise, music, and speech are modalities of the sound mode. Theoretically, a multimodal computer system is a system capable of integrating several modalities (even if it integrates only one mode). A multimodal approach can give not only better performance but also more robustness when one of the modalities is acquired in a noisy environment. Because of the rich characteristics of natural phenomena, it is rare that a single modality provides complete knowledge of the phenomenon of interest.
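One common way to exploit several modalities is decision-level (late) fusion, where a separate classifier is trained on each modality and their predicted class posteriors are combined. The sketch below illustrates this general idea with scikit-learn on synthetic data; it is a toy example under stated assumptions, not the feature-cooperation scheme of the paper above:

```python
# Generic sketch of decision-level (late) fusion of two per-modality
# classifiers. Synthetic features stand in for real image/signal modalities.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, size=n)  # binary labels (e.g. healthy vs. glaucoma)

# Two synthetic "modalities": label signal plus modality-specific noise.
X_a = y[:, None] + rng.normal(scale=1.0, size=(n, 5))  # e.g. texture features
X_b = y[:, None] + rng.normal(scale=2.0, size=(n, 3))  # e.g. shape features

clf_a = LogisticRegression().fit(X_a, y)
clf_b = SVC(probability=True).fit(X_b, y)

# Late fusion: average the per-modality class posteriors, then take argmax.
p = (clf_a.predict_proba(X_a) + clf_b.predict_proba(X_b)) / 2
y_fused = p.argmax(axis=1)
print("fused training accuracy:", (y_fused == y).mean())
```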
What Drives User Stickiness and Satisfaction in OTT Video Streaming Platforms? A Mixed-Method Exploration
Published in International Journal of Human–Computer Interaction, 2023
Sridevi Periaiya, Ajith T. Nandukrishna
Modality refers to the different methods of presenting media information (e.g., audio or pictures) that engage multiple facets of the human senses (e.g., seeing, hearing). The presentation of content in multiple modalities is both convenient and cognitively and perceptually meaningful (Sundar & Limperos, 2013). The modalities offered by OTT video-streaming platforms make consumers feel closer to reality by bringing them closer to the content. This is made possible when customers have a positive outlook on the quality of the content delivered (Sundar & Limperos, 2013). The audience’s emotional response to the content provided by a platform can be significantly influenced by the modal experience (Sundar, 2008); as a result, it can affect satisfaction.
Trade-offs in the design of multimodal interaction for older adults
Published in Behaviour & Information Technology, 2022
Gianluca Schiavo, Ornella Mich, Michela Ferron, Nadia Mana
Privacy and context of use. The context of use should be carefully considered when designing multimodal technology (Neves et al. 2015; Rico and Brewster 2010a, 2010b): older people have privacy and social acceptability concerns about using some modalities in public spaces, as in the case of voice commands or mid-air gestures (Rico and Brewster 2010a). However, one of the advantages of multimodal interaction is the possibility of using one modality rather than another according to the specific context (e.g. gestures instead of voice commands in noisy environments).
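Such context-dependent modality selection can be prototyped with a few simple rules. The sketch below is a minimal, hypothetical illustration (the Context fields and thresholds are assumptions, not from the article) of preferring non-voice input in public or noisy settings:

```python
# Hypothetical rule-based selection of an input modality from the context of
# use: avoid voice in public (privacy/social acceptability) and in noise.
from dataclasses import dataclass

@dataclass
class Context:
    noise_db: float  # ambient noise level in dB (assumed available)
    in_public: bool  # privacy / social-acceptability concern

def select_input_modality(ctx: Context) -> str:
    if ctx.in_public:
        # Voice and mid-air gestures raise social-acceptability concerns
        # in public spaces (Rico and Brewster 2010a); fall back to touch.
        return "touch"
    if ctx.noise_db > 70:
        # Speech recognition degrades in noisy environments; use gestures.
        return "gesture"
    return "voice"

print(select_input_modality(Context(noise_db=80, in_public=False)))  # gesture
```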