Visual Displays
Published in Julie A. Jacko, The Human–Computer Interaction Handbook, 2012
Christopher M. Schlick, Carsten Winkelholz, Martina Ziefle, Alexander Mertens
The ability of several touch screen technologies to support multitouch applications is in particular demand in cooperative product design and project management. Multitouch screens allow the precise measurement of more than one touch point on the screen simultaneously and can therefore identify complex gestures for advanced interaction. Of the technologies just mentioned, the following can be used for multitouch purposes: capacitive screens, optical (infrared) screens, and, more recently, resistive screens. More details can be found in Brown (2008); Chen, Cranton, and Fihn (2011); Jhuo, Wu, and Hu (2009); Maxwell (2007); Mertens et al. (2011); and Saffer (2008).
Controllers
Published in Martin Russ, Sound Synthesis and Sampling, 2012
Multi-touch enables a number of additional ways of interacting with onscreen displays. Pictures can be scaled by touching diagonally opposite corners and moving the fingers away from each other. Rotating the two fingers causes the picture to rotate. Fingers can do different things in different parts of the screen at once, which stops the screen being a one-function display, and turns it into something more gestural and multi-channel.
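The scale-and-rotate behavior described above reduces to simple geometry on the two touch points. As a minimal sketch (the function name and coordinate convention are illustrative, not from any particular touch API), the scale factor is the ratio of the fingers' separation after versus before the gesture, and the rotation is the change in the angle of the line joining them:

```python
import math

def pinch_transform(p1_start, p2_start, p1_end, p2_end):
    """Derive the scale factor and rotation angle implied by a
    two-finger gesture, given each finger's start and end position
    as (x, y) tuples. Illustrative sketch, not a real touch API."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    # Scale: ratio of finger separation after vs. before the gesture.
    scale = dist(p1_end, p2_end) / dist(p1_start, p2_start)
    # Rotation: change in angle of the line joining the two fingers.
    rotation = angle(p1_end, p2_end) - angle(p1_start, p2_start)
    return scale, rotation

# Fingers on opposite corners move apart without rotating:
s, r = pinch_transform((100, 100), (200, 200), (50, 50), (250, 250))
# → scale 2.0 (separation doubled), rotation 0.0
```

Real gesture recognizers layer thresholds and smoothing on top of this, but the core mapping from two tracked points to a scale/rotation pair is exactly this calculation.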
Detection of Affective States of the Students in a Blended Learning Environment Comprising of Smartphones
Published in International Journal of Human–Computer Interaction, 2021
Subrata Tikadar, Samit Bhattacharya
Touch interactions are generally completed in one of two ways: general touch on any part of the screen except a virtual keyboard, and typing on a virtual keyboard. User behavior may differ between these two modes of interaction. In the former, users generally interact through tap, scroll, swipe, or multi-touch gestures. In the case of typing, on the other hand, users only tap. Moreover, the frequency of taps during typing is much higher than during general touch interaction. A single computational model may not be sufficient for detecting the affective states of users from the two types of touch interaction behavior. Therefore, we propose a process model comprising two computational models, namely, Touch-Affect and Type-Affect. Touch-Affect is responsible for detecting the affective state from general touch interaction behavior, whereas Type-Affect is responsible for detecting the affective state from typing behavior. The overall process model, in the form of a flow diagram, is shown in Figure 2.
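The two-branch routing described above can be sketched as a simple dispatcher. This is a minimal illustration, not the paper's implementation: the `TouchEvent` fields, the `on_virtual_keyboard` flag, and the two model functions are hypothetical stand-ins for the Touch-Affect and Type-Affect models.

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    x: float
    y: float
    on_virtual_keyboard: bool  # hypothetical flag: did the touch land on the keyboard?

# Hypothetical stand-ins for the two computational models.
def touch_affect(events):
    return f"affect from {len(events)} general-touch events"

def type_affect(events):
    return f"affect from {len(events)} typing events"

def detect_affect(events):
    """Route each event to the model matching its interaction type,
    mirroring the two-branch process model sketched above."""
    typing = [e for e in events if e.on_virtual_keyboard]
    general = [e for e in events if not e.on_virtual_keyboard]
    results = {}
    if general:
        results["Touch-Affect"] = touch_affect(general)
    if typing:
        results["Type-Affect"] = type_affect(typing)
    return results
```

The design point is the split itself: because typing produces a much denser stream of taps than general interaction, each branch can use features and thresholds tuned to its own event statistics.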
Flexible wearable sensors - an update in view of touch-sensing
Published in Science and Technology of Advanced Materials, 2021
Chi Cuong Vu, Sang Jin Kim, Jooyong Kim
Multi-touch is a technology that enables a surface to recognize the presence of more than one point of contact at the same time. Multi-touch functionality, mainly based on sensor arrays, allows the user to perform multiple-finger gestures such as swipe, scroll, select, zoom in, and zoom out. These sensor arrays can be fabricated on capacitive, resistive, triboelectric, or optical principles. However, many studies have demonstrated that capacitive or resistive sensor arrays are best suited for multi-touch surfaces [26–28].
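Recognizing more than one contact on a sensor array typically means thresholding the array readout and grouping adjacent active cells into distinct contact regions. The following is a minimal sketch of that idea, assuming a 2-D grid of normalized sensor values (the threshold and grid layout are illustrative, not from the cited studies):

```python
def detect_touches(grid, threshold=0.5):
    """Find distinct contact regions in a 2-D sensor-array readout by
    thresholding and grouping adjacent active cells (flood fill).
    Returns one (row, col) centroid per detected contact."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    touches = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] >= threshold and (r, c) not in seen:
                # Flood-fill one connected region of active cells.
                stack, region = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] >= threshold
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                # One contact per connected region; report its centroid.
                touches.append((sum(p[0] for p in region) / len(region),
                                sum(p[1] for p in region) / len(region)))
    return touches

# Two separate fingers pressing a 3x5 sensor array:
grid = [
    [0.9, 0.8, 0.0, 0.0, 0.0],
    [0.7, 0.9, 0.0, 0.0, 0.8],
    [0.0, 0.0, 0.0, 0.0, 0.9],
]
# → two contacts, one per activated region
```

This is why the array structure matters for multi-touch: a single aggregate reading cannot separate two fingers, but a grid of independent cells lets each contact form its own connected region.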