Measurements and Assessment of Lighting Parameters and Measures of Non-Visual Effects of Light
Published in Agnieszka Wolska, Dariusz Sawicki, Małgorzata Tafil-Klawe, Visual and Non-Visual Effects of Light, 2020
Agnieszka Wolska, Dariusz Sawicki, Małgorzata Tafil-Klawe
Among the methods of recording eye movements, we can distinguish electro-oculography and video-oculography. Electro-oculography (EOG) is a method based on measuring differences in the bioelectric potentials of the muscles located in the ocular area. From the amplitude of the recorded signal, the magnitude of rapid eye movements (called saccades) is determined. Movements of the eye relative to the surface electrodes placed around the eye produce an electrical signal that corresponds to eye position. Video-oculography (VOG) or eye-tracking is a non-invasive, video-based method of measuring horizontal, vertical, and torsional position components of the movements of both eyes. An eye-tracker can detect the presence, attention, and focus of the user.
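The saccade detection implied above can be sketched with a simple velocity-threshold rule on a position signal, such as one derived from EOG. This is an illustrative sketch, not the authors' method; the threshold of 30 deg/s and the synthetic trace are assumptions for demonstration.

```python
import numpy as np

def detect_saccades(position_deg, fs_hz, vel_thresh_deg_s=30.0):
    """Flag samples whose angular velocity exceeds a threshold.

    position_deg : 1-D array of horizontal eye position in degrees
    fs_hz        : sampling rate of the recording in Hz
    Returns a boolean mask marking candidate saccade samples.
    """
    velocity = np.gradient(position_deg) * fs_hz  # deg/s
    return np.abs(velocity) > vel_thresh_deg_s

# Synthetic trace: two fixations joined by a rapid 10-degree shift (a saccade).
fs = 500  # Hz (assumed sampling rate)
trace = np.concatenate([np.zeros(100), np.linspace(0, 10, 20), np.full(100, 10.0)])
mask = detect_saccades(trace, fs)
print(mask.sum())  # count of samples flagged as saccadic (the ramp segment)
```

The interval between flagged regions would then give the fixation durations, and the position change across a flagged region gives the saccade amplitude.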
Vestibular and Related Oculomotor Disorders
Published in Anthony N. Nicholson, The Neurosciences and the Practice of Aviation Medicine, 2017
Nicholas J. Cutfield, Adolfo M. Bronstein
For patients who report loud sound-induced disequilibrium, oscillopsia or vertigo, the eyes should be examined for torsional nystagmus while applying a continuous loud sound (100 dB) or with a Valsalva manoeuvre. A low-amplitude torsional nystagmus can be difficult to see clinically (opening the eyelids wide to view the scleral vessels can help) and is more reliably recorded with video-oculography. However, high-resolution tomography of the temporal bones is the most important investigation. Additionally, the muscle response induced by applying loud clicks (the vestibular-evoked myogenic potential) is easier to elicit owing to enhanced bone transmission (Colebatch et al., 1998). Similarly, the audiogram will typically show an ‘air–bone gap’ at 1 kHz and below due to enhanced transmission of sound with the ‘bone’ stimulus, a kind of conductive hyperacusis.
Smart Eye-Tracking Sensors Based on Pixel-Level Image Processing Circuits
Published in Khosla Ajit, Kim Dongsoo, Iniewski Krzysztof, Optical Imaging Devices, 2017
Photo-oculography or video-oculography groups together a wide variety of eye movement recording techniques that involve the measurement of distinguishable features of the eyes under rotation/translation, such as the apparent shape of the pupil, the position of the limbus, and corneal reflections of a closely situated directed light source. Automatic limbus tracking often uses photodiodes mounted on spectacle frames, as shown in Figure 7.4. This method requires the head to be fixed, e.g., by using either a head/chin rest or a bite bar.
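One of the features mentioned above, the pupil position, is commonly estimated by segmenting the dark pupil region and taking its centroid. A minimal sketch of this idea, using a synthetic grayscale image rather than a real camera frame (the intensity threshold of 50 is an assumption):

```python
import numpy as np

def pupil_center(image, threshold=50):
    """Estimate the pupil center as the centroid of dark pixels.

    image     : 2-D array of grayscale intensities (0 = black)
    threshold : intensities below this are treated as pupil
    Returns (x, y) in pixel coordinates, or None if nothing is dark enough.
    """
    ys, xs = np.nonzero(image < threshold)
    if len(xs) == 0:
        return None
    return xs.mean(), ys.mean()

# Synthetic eye image: bright background with a dark pupil disc at (40, 25).
img = np.full((60, 80), 200, dtype=np.uint8)
yy, xx = np.ogrid[:60, :80]
img[(xx - 40) ** 2 + (yy - 25) ** 2 <= 8 ** 2] = 10
cx, cy = pupil_center(img)
print(round(cx), round(cy))  # → 40 25
```

Real trackers refine this with ellipse fitting and corneal-reflection detection, but the centroid step above captures the basic feature measurement.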
Gaze Interaction With Vibrotactile Feedback: Review and Design Guidelines
Published in Human–Computer Interaction, 2020
Jussi Rantala, Päivi Majaranta, Jari Kangas, Poika Isokoski, Deepak Akkil, Oleg Špakov, Roope Raisamo
A number of methods have been used for tracking eye movements (Duchowski, 2007) and defining gaze position, or gaze vector (Hansen & Ji, 2010). The most common method is based on analyzing a video image of the eye, also known as video-oculography (VOG). For each captured frame, the tracking software detects a number of visual features, such as pupil size, pupil center, and so on. VOG-based trackers typically require a calibration before the gaze point can be estimated. During the calibration, the user looks at dots (usually five to nine), one at a time. The calibration dots and the corresponding sets of visual features are used by the tracking software to calculate the visual-features-to-gaze-point transformation that is then used to estimate gaze point based on the eye images. An additional corneal reflection from a near-infrared light source (often used to provide stable eye illumination) can help in compensating for head movements, thus improving the tracking accuracy. The tracker can also be mounted on the head, for example, by integrating it into eyeglass frames. Tracking gaze in 3D poses additional challenges, such as how to map the gaze vector to the real-life scene. Techniques include, for example, using a scene camera to recognize visual markers placed in the environment (see, e.g., Pfeiffer & Renner, 2014).
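The calibration step described above, mapping visual features to gaze points from a handful of known dots, can be sketched as a least-squares fit. This sketch assumes a purely linear features-to-screen mapping for simplicity; real trackers often use higher-order polynomials, and the simulated feature response here is an assumption for demonstration.

```python
import numpy as np

def fit_calibration(features, targets):
    """Fit a linear map [fx, fy, 1] -> screen (x, y) by least squares.

    features : (N, 2) pupil-center (or pupil-glint vector) coordinates
    targets  : (N, 2) known screen positions of the calibration dots
    Returns a (3, 2) coefficient matrix.
    """
    A = np.hstack([features, np.ones((len(features), 1))])
    coeffs, *_ = np.linalg.lstsq(A, targets, rcond=None)
    return coeffs

def estimate_gaze(coeffs, feature):
    """Map one feature vector to an estimated on-screen gaze point."""
    return np.append(feature, 1.0) @ coeffs

# Nine calibration dots on a 3x3 grid of screen coordinates.
grid = np.array([[x, y] for y in (100, 300, 500) for x in (100, 400, 700)], float)
# Simulated eye-feature response: a scaled and offset copy of the dot positions.
feats = grid * 0.01 + np.array([2.0, 1.0])
C = fit_calibration(feats, grid)
print(np.round(estimate_gaze(C, feats[4])))  # middle dot → [400. 300.]
```

With five to nine dots, as the excerpt notes, the system is overdetermined for this linear model, which makes the estimate robust to small feature-measurement noise.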