Vision
Published in Anne McLaughlin, Richard Pak, Designing Displays for Older Adults, 2020
One question might be "how large is large enough?" Fortunately, there are rules one can use to guide the design of a display. Font size depends on the resolution of the screen, but visual angle is a display-agnostic measure that can be used to check the size of text. The visual angle is the number of degrees the text subtends at the eye: the size of its arc as projected onto the retina. Small text viewed close up will have the same visual angle as larger text viewed from farther away. To properly estimate the visual angle, one needs to know the typical viewing distance of the display. For example, is the display a phone, flexibly held in the hand? Or a kiosk, fixed at one location? Or a computer screen used for much of the day by a person seated in a chair? The equation for calculating a visual angle is

$$\text{visual angle} = 2\tan^{-1}\!\left(\frac{\text{object size}/2}{\text{object distance}}\right)$$
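This calculation is easy to reproduce; below is a minimal sketch in Python (the function name and example values are illustrative, not from the chapter):

```python
import math

def visual_angle_deg(object_size: float, distance: float) -> float:
    """Visual angle, in degrees, subtended by an object of a given size
    at a given viewing distance (both in the same units)."""
    return math.degrees(2 * math.atan((object_size / 2) / distance))

# Example: 4 mm tall text viewed from 500 mm, a typical desktop distance
print(visual_angle_deg(4, 500))  # ~0.46 degrees (about 27.5 min of arc)
```

The same ~0.46° would be produced by 8 mm text viewed from 1000 mm, which is exactly why visual angle is a display-agnostic check.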
Displays and Controls
Published in R. S. Bridger, Introduction to Human Factors and Ergonomics, 2017
The ability to resolve detail depends on the accommodation of the lens of the eye, the ambient lighting and the visual angle—the angle subtended at the eye by the viewed object. Visual angle is a more useful concept than absolute object size since it takes into account both the size of the object and its distance from the viewer (see Chapter 10). Under good lighting, a minimum visual angle of 15 min of arc is recommended for displays, although 20 min is preferable (this is considerably greater than the values quoted in Chapter 10—the minimum angle for detection of a stimulus is much smaller than the minimum angle for perception of detail and the extraction of useful information). If the viewing distance is known, then minimum target sizes can be specified. For alphanumeric characters, for example, the minimum character height follows directly from the recommended visual angle and the viewing distance.
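The conversion from a recommended visual angle to a physical target size is straightforward; the sketch below assumes a 500 mm viewing distance purely as an example:

```python
import math

def min_target_size_mm(angle_arcmin: float, distance_mm: float) -> float:
    """Smallest target size (mm) that subtends the given visual angle
    (in minutes of arc) at the given viewing distance (mm)."""
    angle_rad = math.radians(angle_arcmin / 60)
    return 2 * distance_mm * math.tan(angle_rad / 2)

# The recommended 15 arcmin minimum and preferred 20 arcmin at 500 mm:
print(min_target_size_mm(15, 500))  # ~2.2 mm character height
print(min_target_size_mm(20, 500))  # ~2.9 mm character height
```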
Assessing situation awareness in an unmanned vehicle control task
Published in Michael A. Vidulich, Pamela S. Tsang, John Flach, Advances in Aviation Psychology, Volume 2, 2017
Joseph T. Coyne, Ciara M. Sibley, Samuel S. Monfort
Although there is active debate over the different approaches and thresholds for defining a fixation (Collewijn & Kowler, 2008; Holmqvist et al., 2011), many researchers may not be aware of the variety of techniques and rely on their eye tracking software to select a method for computing fixations. Indeed, all too often researchers fail to report the specific techniques or thresholds employed to calculate fixations, and even those who do cannot rely on standardized guidelines because none yet exist. The studies discussed previously are, unfortunately, a good representation of the variety of thresholds in use and of incomplete reporting of methods. Specifically, the four studies described above used three different minimum fixation durations: 50 ms (Ratwani et al., 2010), 100 ms (Moore & Gugerty, 2010) and 150 ms (Van de Merwe et al., 2012); one did not report a minimum duration at all (Gartenberg et al., 2014). Furthermore, the reported maximum distance thresholds varied from one to two degrees of visual angle, and one study reported using 30 pixels but failed to provide the distance to the monitor, so the threshold could not be converted to visual angle. Specifying the visual angle in eye tracking research is important because it allows comparison of results across studies with different viewing distances, monitor sizes and monitor pixel densities. Finally, two of the four studies were unclear on the algorithms used to compute fixations; of the remaining two, one used a velocity algorithm and the other a centroid dispersion algorithm.
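To make the role of these thresholds concrete, the sketch below implements a simple dispersion-based (I-DT-style) fixation detector; the 100 ms and 1° defaults are illustrative choices from the range reported above, not a standard:

```python
def detect_fixations(samples, min_duration_ms=100, max_dispersion_deg=1.0):
    """Dispersion-threshold fixation detection.

    samples: chronologically ordered (t_ms, x_deg, y_deg) gaze points,
    with coordinates already converted to degrees of visual angle.
    Returns a list of (start_ms, end_ms, centroid_x, centroid_y).
    """
    fixations, i, n = [], 0, len(samples)
    while i < n:
        j = i
        # Grow the window while its dispersion stays within the threshold.
        while j + 1 < n:
            xs = [s[1] for s in samples[i:j + 2]]
            ys = [s[2] for s in samples[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion_deg:
                break
            j += 1
        if samples[j][0] - samples[i][0] >= min_duration_ms:
            xs = [s[1] for s in samples[i:j + 1]]
            ys = [s[2] for s in samples[i:j + 1]]
            fixations.append((samples[i][0], samples[j][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1
        else:
            i += 1
    return fixations
```

Note that such a detector can only operate in degrees if the raw pixel coordinates have first been converted using the viewing distance and pixel density, which is precisely the information the 30-pixel study omitted.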
A One-Point Calibration Design for Hybrid Eye Typing Interface
Published in International Journal of Human–Computer Interaction, 2022
Zhe Zeng, Elisabeth Sumithra Neuer, Matthias Roetting, Felix Wilhelm Siebert
The text entry interface is based on an octagon-like layout with eight clusters that allow users to input different characters (see Figure 1, left). As Lutz et al. (2015) reported, the accuracy of gaze data decreases significantly from the middle to the edge of the screen after one-point calibration, i.e., interactions in the central area of the interface are registered with higher accuracy. Hence, a larger number of objects can be distinguished in the first stage of detection, which is placed more centrally in the interface, than in the second stage, which is placed at its periphery. In addition, Zeng et al. (2020) found that circle-layout interfaces containing 6 and 8 linearly moving objects achieved good detection rates. Therefore, in this study, eight clusters are used in the first stage, with four character tiles in each cluster. Regarding the direction of movement, studies have shown that the detection accuracy for smooth pursuit eye movements is influenced by the movement direction: horizontal and vertical directions are detected more robustly than diagonal ones (Ke et al., 2013; Krukowski & Stone, 2005). Thus, in order to achieve a more robust identification of gaze directions, the characters in each cluster move along the horizontal and vertical axes, e.g., in cluster "ABCD," A moves left, B moves up, C moves right, and D moves down. In this study, 1° of visual angle corresponds to 39 pixels at a distance of 60 cm from the user to the screen.
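The reported correspondence can be checked from the screen geometry; in the sketch below, the ~37 px/cm (~94 ppi) pixel density is inferred from the stated figures rather than given in the paper:

```python
import math

def pixels_per_degree(distance_cm: float, px_per_cm: float) -> float:
    """Number of pixels spanned by 1 degree of visual angle
    at a given viewing distance."""
    return 2 * distance_cm * math.tan(math.radians(0.5)) * px_per_cm

# A display with ~37 px/cm (~94 ppi) viewed from 60 cm:
print(pixels_per_degree(60, 37))  # ~39 px, matching the reported value
```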
Automatic visibility evaluation method for application in virtual prototyping environment
Published in International Journal of Computer Integrated Manufacturing, 2019
Weiwei Wu, Xiaodong Shao, Huanling Liu
Most of the available commercial ergonomics software products focus mainly on the view-region factor and pay only marginal attention to the influence of an object's visual distance on its visibility. Common experience shows that the visibility of an object varies with its visual distance, so visibility evaluation results that ignore the visual-distance factor are unreliable. Most of the available ergonomics literature provides only a recommended visual distance according to the type of work task, which cannot satisfy the requirements of the visibility evaluation process. It is also noteworthy that, at the same visual distance, observed objects of different sizes differ in visibility, so considering the visual-distance factor alone is insufficient. Therefore, in order to account for both the visual-distance factor and the object-size factor, the visual-angle factor is defined by integrating the two. The visual angle refers to the angle between the two lines that connect the eye to the two endpoints of the object's size range (as shown in Figure 10).
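In a 3D virtual-prototyping scene, this angle can be computed from the eye position and the two endpoints of the object's size range; the sketch below is a generic implementation of that geometric definition (names and values are illustrative):

```python
import math

def visual_angle_3d(eye, p1, p2):
    """Angle (degrees) between the two lines connecting the eye to the
    endpoints p1 and p2 of an object; points are (x, y, z) tuples."""
    v1 = [a - e for a, e in zip(p1, eye)]
    v2 = [a - e for a, e in zip(p2, eye)]
    dot = sum(a * b for a, b in zip(v1, v2))
    norm1 = math.sqrt(sum(a * a for a in v1))
    norm2 = math.sqrt(sum(a * a for a in v2))
    return math.degrees(math.acos(dot / (norm1 * norm2)))

# A 10 cm wide object centred 100 cm in front of the eye:
print(visual_angle_3d((0, 0, 0), (-5, 0, 100), (5, 0, 100)))  # ~5.7 degrees
```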
Perception-action coupling in complex game play: Exploring the quiet eye in contested basketball jump shots
Published in Journal of Sports Sciences, 2018
André Klostermann, Derek Panchuk, Damian Farrow
Synchronized to the movement phases, the quiet eye – defined as the last fixation (i.e., anchoring of the gaze¹ for at least 100 ms) at the basketball hoop before the shot phase (cf. Vickers, 2007) – was coded. The offset was coded when the fixation cross deviated from the fixated position for more than 100 ms. The onset and the offset of the quiet eye were calculated as relative values in relation to the beginning of the shot phase. Thus, negative values denote moments in time before, and positive values moments in time after, the final extension of the shooting arm. Furthermore, the quiet-eye duration was calculated as the difference between quiet-eye offset and quiet-eye onset and, in addition, the relative quiet-eye duration was obtained by dividing the absolute values by the total movement time (i.e., the sum of the jump and shot phases). In view of the discussion as to whether the critical object (i.e., the basketball hoop in the jump-shooting task) has to be visible throughout the quiet-eye period, we further analysed the quiet-eye data with regard to the occlusion phase (labelled QEOcc). Thus, for trials with occlusion durations of at least 100 ms (on average 30.7% of all trials), the quiet eye was recalculated by replacing the quiet-eye offset with the respective occlusion onset. However, if this new duration was too short (<100 ms), the quiet eye was calculated as the fixation after the occlusion offset. Consequently, the original quiet-eye onset was retained in the former case and the original quiet-eye offset in the latter. As a result, on average 5.3% of all trials had to be excluded from the further analyses of QEOcc because of insufficient durations.

¹ Different from earlier quiet-eye research, the quiet-eye fixation was not calculated with a dispersion-based algorithm, which defines fixations by the visual angle within which the point of gaze moves in a circular fashion. Since the distance between the eye and the objects in space was not available, the visual angle could not be calculated in the current study (for more information see, e.g., Duchowski, 2007); in addition to the minimum-duration requirement, the quiet-eye fixation was therefore defined as anchoring at objects without taking the visual angle into account. However, had the visual angle been estimated, the range of foveal vision (>3°) might have been exceeded.
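The occlusion-based recalculation can be summarised in a short sketch; all names are illustrative, and times are assumed to be in milliseconds relative to the beginning of the shot phase:

```python
def recompute_qe_occ(qe_onset, qe_offset, occ_onset, post_occ_fix_onset,
                     min_duration=100):
    """Recalculate the quiet eye (QEOcc) for a trial whose occlusion phase
    lasted at least min_duration ms. Returns (onset, offset), or None if
    the trial is excluded for insufficient duration."""
    # First attempt: retain the original onset; replace the offset with
    # the occlusion onset.
    if occ_onset - qe_onset >= min_duration:
        return qe_onset, occ_onset
    # Too short: take the fixation after the occlusion offset as the new
    # onset, retaining the original quiet-eye offset.
    if qe_offset - post_occ_fix_onset >= min_duration:
        return post_occ_fix_onset, qe_offset
    return None  # insufficient duration; such trials (5.3% on average) were excluded
```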