Vision
Published in Alfred T. Lee, Vehicle Simulation, 2017
The ability to detect the disparity in depth between these images is known as stereo acuity. About 97% of college-aged individuals can detect depth differences between two objects with disparities as small as 2.3 arc min, and about 80% can detect differences as small as 30 arc s of disparity (Coutant and Westheimer, 1993). Stereo acuity should not be confused with stereopsis, which is the perceptual experience of depth that results from binocular disparity.
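To make these angular thresholds concrete, the small-angle geometry of binocular viewing can convert a disparity threshold into the smallest detectable depth difference at a given viewing distance. The sketch below is an illustration only; the interocular separation and viewing distance are assumed values, not figures from the excerpt.

```python
import math

def min_depth_difference(threshold_arcsec, viewing_distance_m, interocular_m=0.063):
    """Smallest detectable depth difference for a given disparity threshold.

    Uses the small-angle approximation  delta_d ~ eta * d**2 / I,
    where eta is the disparity threshold in radians, d the viewing
    distance, and I the interocular separation.
    """
    eta_rad = math.radians(threshold_arcsec / 3600.0)
    return eta_rad * viewing_distance_m ** 2 / interocular_m

# Assumed example: a 30 arc s threshold at a 0.5 m viewing distance
# corresponds to a depth difference of roughly 0.6 mm.
print(f"{min_depth_difference(30, 0.5) * 1000:.2f} mm")
```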
Three-dimensional display systems
Published in John P. Dakin, Robert G. W. Brown, Handbook of Optoelectronics, 2017
Extracting three-dimensional information about the world from the images received by the two eyes is a fundamental problem for the visual system. In many animals, perhaps the best source of this information is the binocular disparity that results from two forward-facing eyes having slightly different viewpoints of the world [5]. Binocular disparity is processed by the brain, giving the sensation of depth known as stereopsis.
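The same disparity-to-depth relationship is exploited by machine stereo systems. As a hedged illustration (the focal length, baseline, and disparity values below are assumed, not taken from the handbook chapter), a rectified two-camera rig recovers depth from pixel disparity by triangulation:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth from a rectified stereo pair.

    Z = f * B / d, where f is the focal length in pixels, B the
    baseline between the two cameras, and d the horizontal disparity
    in pixels between matching points in the left and right images.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Assumed example: f = 800 px, B = 0.12 m, disparity = 16 px -> Z = 6 m.
print(depth_from_disparity(16, 800, 0.12))
```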
Identification of altitude profiles in 3D geovisualizations: the role of interaction and spatial abilities
Published in International Journal of Digital Earth, 2019
Petr Kubíček, Čeněk Šašinka, Zdeněk Stachoň, Lukáš Herman, Vojtěch Juřík, Tomáš Urbánek, Jiří Chmelík
3D visualizations contain a number of visual cues. Some static monocular depth cues are embedded in the 3D visualizations (e.g. lighting and shading cues, occlusion/interposition, relative size of objects, linear perspective, texture gradients, and aerial perspective). Dynamic monocular depth cues, which are considered extremely effective for depth information recognition (Loomis and Eby 1988), include motion parallax, i.e. movement of objects, the viewer, or both in the scene; in cartography also referred to as kinetic depth (Willett et al. 2015). Such mutual movement can indicate depth information in the viewer's visual field through the perception of different distance changes between the viewer and the objects. In some specific visualization environments, we can also use binocular depth cues, which are represented by binocular disparity, in which case we can see a slightly different view of the scene with each eye separately. The result is synthesized in our brain, creating a 3D perception of the scene. Binocular disparity and convergence are usually provided by 3D glasses or another stereoscopic technology such as head-mounted displays.
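To give a sense of how a stereoscopic display produces binocular disparity, the sketch below offsets a single virtual camera into left- and right-eye viewpoints; each viewpoint is rendered to its own image and presented to the corresponding eye. The eye separation and camera vectors are illustrative assumptions, not details from the cited study.

```python
import numpy as np

def stereo_eye_positions(cam_pos, forward, up, eye_separation=0.064):
    """Offset a camera into left/right eye positions for stereo rendering.

    The camera is shifted by half the eye separation along its 'right'
    vector; rendering the scene once from each position yields the two
    slightly different views whose difference is binocular disparity.
    """
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    half = 0.5 * eye_separation
    return cam_pos - half * right, cam_pos + half * right

# Assumed example: a camera 1.7 m above the ground, looking down the -z axis.
left_eye, right_eye = stereo_eye_positions(
    np.array([0.0, 1.7, 0.0]),    # camera position (m)
    np.array([0.0, 0.0, -1.0]),   # viewing direction
    np.array([0.0, 1.0, 0.0]),    # up vector
)
print(left_eye, right_eye)
```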
Developing an autonomous psychological behaviour of virtual user to atypical architectural geometry
Published in Building Research & Information, 2021
Humans rely on their visual perception to make sense of the environment around them (e.g. to recognize objects and to act in three-dimensional space). Perceiving distance in three-dimensional space is not easy, because the image projected onto our retinas is two-dimensional. We use several monocular (or pictorial) and binocular (or stereoscopic) cues to solve this problem and reconstruct three-dimensional scenes. Monocular depth perception is qualitatively different from binocular depth perception. Binocular depth perception is based on the perceived differences between the images projected onto the left and right retinas, known as binocular disparity. Because the size of our eyes and the distance between them are fixed, binocular disparity is a powerful cue at near distances but becomes much less useful at far distances. To compensate for this disadvantage, we depend on monocular cues, including relative height, motion parallax, texture gradients, occlusion, familiar and relative size, perspective convergence, and shadows, to perceive distance. Although both monocular and binocular cues provide information about three-dimensional objects and distances, and this information in turn guides our actions in the environment, the perception of distance is not accurate. The traditional view of perception holds that we use both monocular and binocular cues to represent three-dimensional space internally. According to this view, the success of our actions depends on how accurately we can build these internal spatial models. However, it has been found that our internal representations do not always correspond accurately to the actual visual environment. Our perception of visual space is often distorted, and the perceived three-dimensional metric (or Euclidean) shape is ambiguous (Lee et al., 2013). Nonetheless, our actions in three-dimensional space are mostly correct. For instance, it is hard to perceive the Euclidean distance from an observer to a cup, yet there is no problem reaching out to grasp that cup. To explain this, Gibson (1979) suggested that we need to consider ecological validity, focusing on perception in natural contexts.
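As a simple illustration of how one of these monocular cues can in principle recover distance, familiar size combines the known physical size of an object with its retinal (angular) size. The object size and visual angle below are assumed for the example, not values from the article.

```python
import math

def distance_from_familiar_size(known_size_m, angular_size_deg):
    """Estimate distance from the familiar-size monocular cue.

    If an object of known physical size S subtends a visual angle theta,
    its distance is approximately  d = S / (2 * tan(theta / 2)).
    """
    theta = math.radians(angular_size_deg)
    return known_size_m / (2.0 * math.tan(theta / 2.0))

# Assumed example: a 10 cm tall cup subtending about 1.9 degrees
# is roughly 3 m away.
print(f"{distance_from_familiar_size(0.10, 1.9):.2f} m")
```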
Desktop versus immersive virtual environments: effects on spatial learning
Published in Spatial Cognition & Computation, 2020
Jiayan Zhao, Tesalee Sensibaugh, Bobby Bodenheimer, Timothy P. McNamara, Alina Nazareth, Nora Newcombe, Meredith Minear, Alexander Klippel
In addition to body-based senses, other potential differences between the iVR system and a low-immersion desktop computer lie in the visual experience delivered by the HMD and the 2D desktop screen. For example, there are variations in display quality and resolution (sometimes similar, but often higher on the desktop screen), physical field of view (larger for the HMD; the physical field of view is determined by the actual size of the display and the viewing distance of the user), and the availability of binocular depth cues (HMD only; Riecke et al., 2010). Specifically, for depth perception, conventional 2D computer screens do not provide stereopsis, and users must rely on monocular cues such as linear perspective, occlusion, texture gradients, and motion parallax (a depth cue in which closer objects appear to move faster than objects farther away) to extract depth information (Sakata, Grove, Hill, Watson & Stevenson, 2017). In contrast, stereoscopic HMDs can provide both the rich monocular depth cues available on 2D computer screens and binocular disparity (McIntire & Liggett, 2014); the latter requires a separate image for each eye and allows for the use of stereoscopic depth cues. According to previous studies, binocular stereopsis and monocular motion parallax are considered the most important depth cues for building appropriate distance perception in complex environments (Gerig et al., 2018; Mikkola, Boev & Gotchev, 2010; Nawrot, 2003). Considering the essential role of distance relationships between landmarks in developing survey knowledge (according to the landmark-route-survey framework; Siegel & White, 1975), it is possible that iVR users would show better spatial learning performance than desktop users. However, it is important to note that binocular vision is less effective as a depth cue at very long distances (about 30 meters or longer; Rousset, Bourdin, Goulon, Monnoyer & Vercher, 2018), because binocular disparity decreases with the distance from the observer to the observed object (Rousset et al., 2018; Willemsen, Gooch, Thompson & Creem-Regehr, 2008). Thus, the relative advantages of iVR over desktop computers in producing depth perception are not uniform across VEs of different sizes.
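A back-of-the-envelope calculation makes this distance limitation concrete. Using the standard small-angle disparity geometry, with an assumed interocular distance and an assumed nominal detection threshold (neither taken from the cited studies), the disparity produced by a fixed depth separation falls from easily detectable at 1 m to well below a typical threshold at around 30 m.

```python
import math

ARCSEC_PER_RAD = 180 * 3600 / math.pi

def disparity_arcsec(distance_m, depth_gap_m, interocular_m=0.063):
    """Binocular disparity (arc s) between two points separated in depth.

    eta = I * delta_d / (d * (d + delta_d)), converted to arc seconds.
    """
    eta_rad = interocular_m * depth_gap_m / (distance_m * (distance_m + depth_gap_m))
    return eta_rad * ARCSEC_PER_RAD

# Assumed example: a 0.5 m depth gap viewed at 1 m vs. 30 m, compared
# against a nominal ~30 arc s detection threshold.
for d in (1.0, 30.0):
    print(f"{d:>5.1f} m -> {disparity_arcsec(d, 0.5):8.1f} arc s")
```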