A Haptic Fish Tank Virtual Reality System for Interaction with Scientific Data
Published in Philip D. Bust, Contemporary Ergonomics 2006, 2020
The idea of multimodal interaction in Human–Computer Interaction has been shown to be an important approach to improving user performance across a variety of tasks. An ideal VR system likewise tries to engage all sensory modalities to create a realistic environment. As one of the crucial sensory modalities, haptics encompasses both force feedback (simulating object hardness, weight, and inertia) and tactile feedback (simulating surface contact geometry, smoothness, slippage, and temperature). Providing such sensory data requires desktop or portable special-purpose hardware, usually a robot, called a haptic interface. For example, the PHANToM from SensAble Technologies is a widely used haptic interface built around a robotic armature (Massie and Salisbury, 1994).
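The force-feedback side of haptics described above is commonly rendered with a penalty method: when the device's probe penetrates a virtual surface, a spring-like restoring force proportional to the penetration depth is sent back to the motors. A minimal sketch for a one-dimensional virtual wall, with illustrative parameter names not tied to any particular device SDK:

```python
def wall_force(probe_pos: float, wall_pos: float = 0.0,
               stiffness: float = 500.0) -> float:
    """Return the restoring force (N) for a probe against a virtual wall.

    Penalty-based rendering: Hooke's law F = k * penetration, a common
    way to simulate object hardness on a force-feedback device. The
    stiffness value (N/m) is an illustrative assumption.
    """
    penetration = wall_pos - probe_pos  # positive when probe is inside the wall
    if penetration <= 0.0:
        return 0.0  # probe in free space: no force rendered
    return stiffness * penetration  # push the probe back toward the surface


# Example: probe 2 mm inside the wall with k = 500 N/m -> 1 N restoring force
print(wall_force(-0.002))  # 1.0
```

In a real system this function would run inside a high-rate servo loop (typically around 1 kHz) so the rendered wall feels rigid rather than spongy.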
Multimodal Interfaces
Published in Julie A. Jacko, The Human–Computer Interaction Handbook, 2012
The design of new multimodal systems has been inspired and organized largely by two things. First, the cognitive science literature on intersensory perception and intermodal coordination during production is beginning to provide a foundation of information for user modeling, as well as information on what systems must recognize and how multimodal architectures should be organized. For example, the cognitive science literature has provided knowledge of the natural integration patterns that typify people’s lip and facial movements with speech output (Benoit et al. 1996; Ekman 1992; Ekman and Friesen 1978; Fridlund 1994; Hadar et al. 1983; Massaro and Cohen 1990; Stork and Hennecke 1995; Vatikiotis-Bateson et al. 1996), and their coordinated use of manual or pen-based gestures with speech (Kendon 1980; McNeill 1992; Oviatt, DeAngeli, and Kuhn 1997). Given the complex nature of users’ multimodal interaction, cognitive science has and will continue to play an essential role in guiding the design of robust multimodal systems. In this respect, a multidisciplinary perspective will be more central to successful multimodal system design than it has been for traditional GUI design. The cognitive science underpinnings of multimodal system design are described in Section 18.5.
Mobile multimodal construction data entry: Moving forward
Published in Manuel Martínez, Raimar Scherer, eWork and eBusiness in Architecture, Engineering and Construction, 2020
Multimodal interaction is the integration of visual and speech interfaces through the delivery of combined graphics and speech on handheld devices (Hjelm 2000). This technology enables more complete communication of information and supports effective decision-making. It also helps overcome the limitations imposed by the small screens of mobile devices. Most PDAs and Pocket PCs today have a small screen of about 3.5 × 5, and mobile phones have even smaller screens (Pham & Wong 2004). The small screen size, and the need to use a pen to enter data and commands, present an inconvenience for field users, especially if their hands are busy with other field equipment or instruments.
System Development and Evaluation of Human–Computer Interaction Approach for Assessing Functional Impairment for People with Mild Cognitive Impairment: A Pilot Study
Published in International Journal of Human–Computer Interaction, 2023
Tian Su, Zixing Ding, Lizhen Cui, Lingguo Bu
Non-hardware-dependent HCI technology has been explored for cognitive rehabilitation in several countries. Kanno et al. utilized a cell phone application that combined mobile augmented reality and voice interaction to aid in medication recognition and reminders, supporting medication adherence and cognitive therapy for AD patients (Kanno et al., 2019). Henry et al. created a mobile device application using MobileNet and Single Shot Detector (SSD) algorithms to detect drug intake and prevent medication errors in AD patients (Henry et al., 2022). Eun et al. designed a web-based game that used a touch screen interface to facilitate interactive tasks, helping patients with hand movements and brain exercises (Eun et al., 2022). Non-hardware-dependent interaction technologies offer portability and wide applicability, but they require careful consideration in terms of software interaction design. Designers must incorporate multimodal interaction, as well as voice and video elements, to facilitate interaction for patients with cognitive impairment (Berrett et al., 2022).
Integrating Mobile Multimodal Interactions based on Programming By Demonstration
Published in International Journal of Human–Computer Interaction, 2021
Zouhir Bellal, Nadia Elouali, Sidi Mohamed Benslimane, Cengiz Acarturk
From the perspective of Human–Computer Interaction, multimodal interaction should aim to enable natural forms of interaction by simultaneously providing the user with various modalities for interacting with the system, at the input level (e.g., speech, gesture, multitouch gesture) as well as the output level (e.g., graphics, sound, synthesized speech) (Bubalo et al., 2016). For example, it is more suitable for a user to control an online map application while driving through a freehand modality such as voice, rather than through a tactile (i.e., touchscreen) modality (e.g., a one-finger swipe gesture). Balancing these two modalities is not only beneficial for accessibility (Stephanidis et al., 2019); it also provides greater convenience (e.g., natural input mode recognition) and flexibility (e.g., adaptation to user context, to tasks, or to users' preferred interaction modalities) (Lalanne et al., 2009).
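The driving example above amounts to a context-driven modality selection policy. A minimal sketch, where the context flags and modality names are illustrative assumptions rather than any published framework's API:

```python
from dataclasses import dataclass


@dataclass
class Context:
    """Illustrative user-context flags for modality selection."""
    driving: bool = False
    hands_free: bool = False  # hands busy with other tasks or equipment
    noisy: bool = False       # loud environment degrades speech recognition


def choose_input_modality(ctx: Context) -> str:
    """Pick the input modality best suited to the user's current context."""
    if ctx.driving or ctx.hands_free:
        # Hands and eyes are occupied: prefer a freehand modality,
        # falling back to explicit confirmation when recognition is risky.
        return "speech-with-confirmation" if ctx.noisy else "speech"
    return "touch"  # default: direct manipulation on the touchscreen


print(choose_input_modality(Context(driving=True)))  # speech
print(choose_input_modality(Context()))              # touch
```

A production system would typically weight such rules against recognizer confidence and user preference rather than hard-coding them, which is the kind of adaptation Lalanne et al. (2009) discuss.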
Context-Aware Augmented Reality Using Human–Computer Interaction Models
Published in Journal of Control and Decision, 2022
Ying Sun, Qiongqiong Guo, Shumei Zhao, Karthik Chandran, G. Fathima
Kim et al. (2021) describe Multimodal Interaction Systems (MIS) based on the Internet of Things and augmented reality. The paper surveys studies of IoT- and AR-based multimodal interaction systems: following a strict literature review protocol, 23 trials were identified and reviewed. The results are reported and analyzed with respect to MIS designs, the interaction between system components, input/output interaction modalities, and open research issues, identifying potential testing opportunities for MIS developers.