Context-Aware Service Provision in Ambient Intelligence: A Case Study with the Tyche Project
Published in Bruno Bouchard, Smart Technologies in Healthcare, 2017
User preferences: The user's preferences for specific peripheral devices are expressed on a Likert scale used to classify the devices in the environment. Each user rates the usability of each peripheral device as follows: the user likes to use the device = 1, the user is neutral toward using the device = 0, the user dislikes using the device = –1. The sum of the user's peripheral preferences for each device is calculated and used in the DCQ evaluation. These preferences can serve as a complement to physical capacity in determining the types of devices the potential user will be able to employ. In the current version of the Tyche project, five types of devices were used: keyboard, mouse, trackball, virtual keyboard, and touch screen slider. Other kinds of devices can easily be added to the middleware's ontology.
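Since the excerpt describes the scoring scheme only informally, the following Python sketch is a hypothetical illustration of how such Likert-style preference sums could be computed. The function name, the input format, and the aggregation over multiple ratings per device are assumptions; the actual Tyche middleware interface and the way the sums enter the DCQ evaluation are not shown in the excerpt.

```python
from collections import defaultdict

# Likert values from the excerpt: 1 = likes, 0 = neutral, -1 = dislikes.
# Device types in the current Tyche version: keyboard, mouse, trackball,
# virtual keyboard, touch screen slider.

def preference_scores(ratings):
    """Sum the Likert ratings collected for each device type.

    `ratings` maps a device name to a list of Likert values
    (a hypothetical input format, not the Tyche API). The per-device
    sum is the quantity the excerpt says feeds the DCQ evaluation.
    """
    totals = defaultdict(int)
    for device, values in ratings.items():
        for value in values:
            if value not in (-1, 0, 1):
                raise ValueError(f"unexpected Likert value {value!r} for {device}")
            totals[device] += value
    return dict(totals)

# Example with made-up ratings
ratings = {
    "keyboard": [1, 1, 0],
    "trackball": [-1, 0],
    "touch screen slider": [1, -1, 1],
}
print(preference_scores(ratings))
# {'keyboard': 2, 'trackball': -1, 'touch screen slider': 1}
```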
Human Brain-Computer Interface
Published in Alexa Riehle, Eilon Vaadia, Motor Cortex in Voluntary Movements, 2004
Gert Pfurtscheller, Christa Neuper, Niels Birbaumer
Besides neural networks and linear classifiers, hidden Markov models (HMMs) are also suitable for BCI research.75,76 An HMM can be seen as a finite automaton that moves through discrete states and, at every time point, emits a feature vector that depends on the current state, where each feature vector is modeled by Gaussian mixtures. The transition probabilities from one state to another are described by a transition matrix. A one-state HMM was used to implement an EEG-based spelling device, also called the "virtual keyboard."76 The feature vector was composed of logarithmic band power values estimated in two bipolar EEG channels, recorded over the sensorimotor area during two types of motor imagery. The subject's task was to copy presented words (copy spelling) by selecting single letters out of a predefined alphabet. The structure of the virtual keyboard comprises 5 decision levels for the selection of 32 letters and 2 further levels for confirmation and correction: in 5 successive steps the letter set was split into 2 equally sized subsets until the subject selected a certain letter, and 2 further steps allowed the subject to confirm or correct this selection. In a first experiment, 3 able-bodied subjects achieved a spelling rate of 0.67 to 1.02 letters per minute.76 Current studies focus on improving the spelling rate, for instance by considering the probability of each letter and by shortening the trial length.
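The excerpt describes the selection procedure verbally; the short Python sketch below is an illustrative reconstruction of the binary-split scheme (5 decisions over 32 symbols, since 2^5 = 32), not the authors' implementation. The alphabet, the function name, and the use of a plain Boolean list in place of the HMM classifier's motor-imagery decisions are assumptions.

```python
def select_letter(alphabet, decisions):
    """Narrow a 32-symbol alphabet to one symbol via successive halving.

    `decisions[i]` is True to keep the right half at step i, False for the
    left half. In the actual system each decision would come from the BCI
    classifier (e.g., the HMM output for the two motor-imagery classes).
    """
    assert len(alphabet) == 32 and len(decisions) == 5
    subset = list(alphabet)
    for choose_right in decisions:
        half = len(subset) // 2
        subset = subset[half:] if choose_right else subset[:half]
    return subset[0]

alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ_.,?!-"   # 32 symbols (illustrative)
print(select_letter(alphabet, [False, True, False, True, True]))  # -> 'L'
```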
Graphic User Interfaces for Communication
Published in Stefano Federici, Marcia J. Scherer, Assistive Technology Assessment Handbook, 2017
Maria Laura Mele, Damon Millar, Christiaan Erik Rijnders
In 2007, Majaranta and Räihä (2007) categorized text entry methods into four classes according to the input technique: (1) gaze typing, (2) discrete gaze gestures, (3) gaze writing, and (4) eye switches.
Gaze typing: Text entry by direct gaze pointing is an input method based on fixations on a virtual keyboard composed of letters, which can be selected one by one using a switch, a blink, muscle activity, or dwell time. Dwell time is typically set between 500 and 1000 ms for nonexpert users and between 200 and 300 ms for expert users (Majaranta et al., 2009); a minimal dwell-selection sketch follows this classification. Visual or auditory feedback typically follows letter selection, while the entered text is shown in a separate text field, usually at the top or the bottom of the virtual keyboard. Text entry by direct gaze pointing is the most widely adopted method, but it can overload the visual system because it requires the user to constantly shift visual attention between the virtual keyboard and the separate text field.
Discrete gaze gestures: The direct pointing method allows a user to select a target character on the screen via gaze gestures based on a sequence of saccades followed by a brief fixation. Because gaze gestures rely on relative changes in the direction of gaze, this method does not suffer from accuracy and calibration issues (Drewes and Schmidt, 2007). As Majaranta points out, "gestures should be complex enough to differ from natural gaze patterns but still simple enough that people can easily learn and remember them" (Majaranta, 2012, p. 66).
Continuous gaze writing: Even when pointing at a target on a screen, our eyes constantly make small movements around the fixation point. To accommodate these small movements, gaze writing methods use the direction of continuous gaze gestures to select a certain choice on dynamic interfaces. The Communication by Gaze Interaction (COGAIN) association provides a list of eye-tracking systems used for text entry by gaze or gaze-based communication.* Even though continuous eye writing techniques are more efficient than the direct ones, they require a long training time and may still confuse novice users.
Eye switches: Text entry via eye switches uses blinks, winks, or coarse eye movements combined with scanning techniques. Generally, in these systems, the alphabet is laid out in a matrix that is automatically scanned line by line. Once the user chooses the line containing the target letter, the system then scans each letter in the selected line, and the user selects the target by blinking when it is highlighted. Compared to the other methods, text entry via eye switches is slow (around 2–6 words per minute [wpm]), but this is principally due to the combined scanning method.
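As a rough illustration of dwell-time selection for gaze typing, the Python sketch below assumes a stream of timestamped gaze samples already mapped to on-screen keys; the function name, the sample format, and the 600 ms threshold are assumptions, with the threshold chosen to fall within the novice range cited above.

```python
def dwell_select(samples, dwell_ms=600):
    """Yield a key each time gaze rests on it for at least `dwell_ms`.

    `samples` is an iterable of (timestamp_ms, key_under_gaze) pairs,
    where key_under_gaze is None when the gaze is off the keyboard.
    """
    current_key, dwell_start = None, None
    selected_at = None  # prevents re-selecting the same fixation repeatedly
    for t, key in samples:
        if key != current_key:
            current_key, dwell_start, selected_at = key, t, None
        elif key is not None and selected_at is None and t - dwell_start >= dwell_ms:
            selected_at = t
            yield key

# Example: gaze lingers on 'H' long enough, glances at 'E' too briefly
samples = [(0, "H"), (200, "H"), (650, "H"), (900, "E"), (1000, "H")]
print(list(dwell_select(samples)))  # ['H']
```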
Text entry rate of access interfaces used by people with physical disabilities: A systematic review
Published in Assistive Technology, 2018
Heidi Horstmann Koester, Sajay Arthanat
These interfaces present a virtual keyboard on the screen. To enter text, the user moves the mouse cursor to the desired target and either clicks or dwells there. (We call these "cursor selection" interfaces to highlight the need for cursor movement as part of text entry; this category does not include the tap-to-type OSK used on a tablet or smartphone.) Table 4 groups the text entry data for these 11 studies across four motor sites.
Do Changes in the Body-Part Compatibility Effect Index Tool-Embodiment?
Published in Journal of Motor Behavior, 2023
Aarohi Pathak, Kimberley Jovanov, Michael Nitsche, Ali Mazalek, Timothy N. Welsh
A total of four intervention tasks were developed for the present work, with each task involving virtual tool-use to a greater or lesser degree. The four intervention groups were the following: Virtual-Tangible, Virtual-Keyboard, Tool-Perception, and Tool-Absent. Of the four, two intervention tasks were designed to determine whether tool-embodiment may emerge through a virtual tool-use experience and whether the type of human-computer interface influences the embodiment of a tool. These virtual interaction conditions involved 2D environments, not immersive 3D environments. In the Virtual-Tangible group, the tool-use task involved lifelike movements of a tool: participants held a customized Wii™ tracking device, and the manipulation of this device was translated into the position of a virtual arm and rake in a 2D virtual environment. In the Virtual-Keyboard group, participants controlled the movement of the virtual arm and rake by pressing different keys on a standard computer keyboard. Because previous research has revealed that bimodal neurons respond to both visual and somatosensory input through virtual tool-use (Iriki et al., 1996), it was predicted that there would be evidence of tool-embodiment in the Virtual-Tangible group. The reason is that the Virtual-Tangible task produced both visual and proprioceptive sensory information and had a relatively consistent coupling between action production and sensory afference (i.e., different from a real-life tool interaction, but similar in that the coupled visual motion of an arm and the somatosensory information of actual limb movement remained). It was tentatively predicted that evidence of tool-embodiment would not emerge in the Virtual-Keyboard group because of the decoupling between the action (key presses) and the sensory effects of those actions (visual motion of the virtual limb and rake, but no somatosensory information of actual limb movement). It was possible, however, that tool-embodiment might emerge in the Virtual-Keyboard group because there was a consistent action-perception coupling: specific muscle contractions (leading to keypresses) consistently generated specific visuosensory events (directional arm-rake motions in the virtual environment) and somatosensory events (proprioceptive and tactile information from the finger movements and keypresses).