Virtual Reality in Robotic Neurorehabilitation
Published in Christopher M. Hayre, Dave J. Muller, Marcia J. Scherer, Virtual Reality in Health and Rehabilitation, 2020
Nicolas Wenk, Karin A. Buetler, Laura Marchal-Crespo
Immersive virtual reality HMDs immerse participants in a 3D computer-generated VE, providing the illusion of being inside the VE with the ability to interact with its elements – a sensation commonly referred to as “presence” (Sanchez-Vives and Slater, 2005). Current off-the-shelf immersive VR HMDs (e.g., Oculus, Facebook, USA; HTC Vive, HTC, Taiwan & Valve, USA) are well accepted by healthy young and older users, and no association with cybersickness, i.e., motion sickness symptoms developed when being in VR (LaViola, 2000), has been reported (Huygelier et al., 2019). Although immersive VR HMDs also suffer from the vergence-accommodation conflict, there is neither perspective shift nor focal rivalry, as they show a purely virtual environment. Commercial immersive VR HMDs offer (almost) total audiovisual immersion in the VE. Although this immersion might have the drawback of “disconnecting” patients from their body and/or the therapists, it gathers all of the previously identified potential benefits of VR (impact on patients' motivation and engagement, adaptation of the task/environment to patients' needs, and provision of visual feedback) and reduces possible incongruences between the perception of real and virtual objects, facilitating the transfer of acquired skills (Levac et al., 2019a).
Cloud VR Overview
Published in Huaping Xiong, Dawei Li, Kun Huang, Mu Xu, Yin Huang, Lingling Xu, Jianfei Lai, Shengjun Qian, Cloud VR, 2020
Huaping Xiong, Dawei Li, Kun Huang, Mu Xu, Yin Huang, Lingling Xu, Jianfei Lai, Shengjun Qian
Vergence-accommodation conflict is also known as focusing conflict. In principle, the crystalline lens needs to become thicker or thinner to project images precisely on the retina for objects at different distances. Yet this change does not occur in VR headsets, which cannot provide the depth perception human eyes need to view the world in 3D. Specifically, the light rays emitted by the headset screen do not carry the depth cues of an object, so the eye stays focused on the screen: the crystalline lens remains unchanged even when virtual objects appear at different distances. The result is a conflict, and often dizziness, which may be reduced by improving image quality, providing 3D surround sound, reducing motion-to-photon (MTP) latency, and giving force feedback.
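The mismatch described above can be made concrete with a little geometry: the eyes converge on the virtual object's apparent distance, while accommodation stays locked at the headset's fixed focal plane. The following sketch quantifies this in diopters; the interpupillary distance (63 mm) and the 2 m focal plane are illustrative assumptions, not values from the chapter.

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.063):
    """Vergence angle (degrees) for a fixation point at distance_m,
    assuming a typical interpupillary distance of 63 mm."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

def vac_mismatch_diopters(virtual_distance_m, focal_plane_m=2.0):
    """Vergence-accommodation mismatch in diopters: vergence follows the
    virtual object, but accommodation stays at the headset's fixed focal
    plane (assumed here to sit at 2 m)."""
    return abs(1.0 / virtual_distance_m - 1.0 / focal_plane_m)

# A virtual object at 0.5 m viewed on a headset with a 2 m focal plane:
print(round(vergence_angle_deg(0.5), 2))     # convergence demand in degrees
print(round(vac_mismatch_diopters(0.5), 2))  # 1.5 D vergence-accommodation mismatch
```

An object rendered at the focal-plane distance itself produces zero mismatch, which is why fixed-focus headsets feel most comfortable for content placed near that depth.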
Progress of display performances: AR, VR, QLED, OLED, and TFT
Published in Journal of Information Display, 2019
Ho Jin Jang, Jun Yeob Lee, Jeonghun Kwak, Dukho Lee, Jae-Hyeung Park, Byoungho Lee, Yong Young Noh
The focal cue is also an important aspect of an NED. As AR and VR NEDs are usually binocular, it is trivial to give a depth sensation by presenting stereoscopic images with binocular parallax. Most commercialized AR and VR NEDs, however, provide only binocular parallax, without a corresponding focal cue for each eye. The absence of a correct focal cue causes a mismatch between the depth perceived by the brain and the depth on which each eye focuses. This is called the ‘vergence-accommodation conflict (VAC)’ and is considered a major cause of eye fatigue in extended AR and VR viewing. In AR, this is an even more important issue, as a displayed image should give the same focal cue as the associated real object for realistic optical matching. To present a correct focal cue, various techniques are currently being actively researched, including holographic, light field, multifocal, and varifocal displays. Such techniques can form images either at continuous depths, giving true and exact focal cues, or in a few discrete depth planes, keeping the focal cue error within the zone of comfort shown in Figure 3 [18]. Maxwellian NEDs, which partially alleviate the VAC by removing the focal cue completely, have also been attracting attention of late.
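The discrete-depth-plane strategy mentioned above (multifocal displays) can be sketched as a nearest-plane lookup in dioptric space, with the residual focal-cue error checked against a comfort tolerance. The plane depths and the 0.5 D tolerance below are illustrative assumptions; the actual zone of comfort is the one shown in Figure 3 of the article.

```python
def nearest_plane_diopters(target_m, planes_m=(0.5, 1.0, 3.0)):
    """Pick the discrete focal plane of a multifocal display whose
    dioptric distance is closest to the target depth. The set of plane
    depths is an assumed example configuration."""
    target_d = 1.0 / target_m
    best = min(planes_m, key=lambda p: abs(1.0 / p - target_d))
    error_d = abs(1.0 / best - target_d)
    return best, error_d

ZONE_OF_COMFORT_D = 0.5  # illustrative tolerance, not the value from Figure 3

# Content rendered at 0.7 m: which plane serves it, and is the error tolerable?
plane, err = nearest_plane_diopters(0.7)
print(plane, round(err, 3), err <= ZONE_OF_COMFORT_D)
```

Working in diopters rather than meters is the natural choice here, since both accommodation demand and the zone of comfort are roughly uniform on a dioptric scale.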
Full-color computational holographic near-eye display
Published in Journal of Information Display, 2019
Seyedmahdi Kazempourradi, Erdem Ulusoy, Hakan Urey
In ray-tracing-based methods, CGHs are computed by processing 3D scenes with computer graphics methods. In particular, for each viewpoint, computer graphics techniques are used to determine perspective views, including occlusion effects, in a fast and efficient manner. Unlike the point cloud and polygon approaches, this approach requires neither hidden-surface removal nor shading processing, because both are performed automatically by the CG technique. Moreover, this approach readily handles natural 3D scenes and 3D computer graphics. However, in most implementations, the final CGH is formed merely as a collection of the 2D perspective views, losing the accommodation cue. Holographic stereograms and integral-imaging-based displays are well-known implementations of this type. Holographic stereograms are composites of elemental holograms generated from 2D images by FFT [35]. The computational cost of holographic stereograms is much lower than that of the point cloud and polygon approaches. On the other hand, although an observer can recognize a 3D image reconstructed from the elemental holograms, the quality of images with large depth is problematic, because the reconstructions are essentially 2D images. To improve this method, researchers have proposed the phase-added stereogram and accurate phase-added stereogram approaches [36,37]; the computational costs of the original and modified holographic stereograms are almost identical. In such displays, even though binocular disparity and motion parallax cues may be delivered correctly, along with appropriate occlusion effects, all objects appear sharp only when the viewer focuses at a single depth (usually the hologram plane), and out-of-plane objects appear blurred when the viewer focuses on them. As a result, such implementations are prone to the well-known vergence-accommodation conflict typical of stereoscopic displays.
A method for computing realistic CGHs of 3D objects has been presented in which the rendered view and the depth map of the scene are used to handle occlusion, shading, and parallax effects and to avoid the VAC [38].
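The core stereogram step described above, one elemental hologram per 2D perspective view obtained by FFT, can be sketched in a few lines of NumPy. This is a minimal illustration under assumed choices: a random phase is attached to each amplitude image (a common way to spread its spectrum), and the elemental holograms are simply tiled side by side; real systems use carefully designed tilings and reference waves.

```python
import numpy as np

def elemental_hologram(view_2d):
    """Elemental hologram from one 2D perspective view via FFT, the core
    step of a holographic stereogram. Attaching a random phase to the
    amplitude image is an assumed, commonly used choice."""
    rng = np.random.default_rng(0)
    phase = np.exp(1j * 2 * np.pi * rng.random(view_2d.shape))
    field = view_2d.astype(np.complex128) * phase
    return np.fft.fftshift(np.fft.fft2(field))

def stereogram(views):
    """Tile elemental holograms of several perspective views into one
    composite hologram plane (illustrative side-by-side layout)."""
    return np.hstack([elemental_hologram(v) for v in views])

# Four 64x64 synthetic perspective views -> one 64x256 composite hologram.
views = [np.ones((64, 64)) * k for k in range(1, 5)]
H = stereogram(views)
print(H.shape)  # (64, 256)
```

The per-view cost is a single 2D FFT, which is why the text notes that stereograms are much cheaper than point cloud or polygon CGH approaches; the phase-added variants [36,37] change only how the phase term is chosen, leaving this cost essentially unchanged.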