Cloud VR Terminals
Published in Huaping Xiong, Dawei Li, Kun Huang, Mu Xu, Yin Huang, Lingling Xu, Jianfei Lai, Shengjun Qian, Cloud VR, 2020
Eye tracking technology not only enables more natural, controller-free interactions, but can also be used for foveated rendering, which renders the region the user is looking at in high definition and the peripheral image at lower resolution. Foveated rendering reduces the load on the GPU and lowers rendering latency, and the resulting frame rate increase can also reduce dizziness. In a cloud VR solution, this technology can reduce both the cloud rendering load and the transmission bandwidth.
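The core idea of foveated rendering can be sketched as choosing a per-tile render resolution from the angular distance between the tile and the gaze point. The thresholds and scale factors below are illustrative assumptions, not values from the chapter:

```python
import math

# Illustrative eccentricity thresholds (degrees of visual angle) and
# render scales -- real systems tune these to the display and the GPU.
FOVEA_DEG = 5.0   # full-resolution foveal region
MID_DEG = 15.0    # near-periphery transition region

def render_scale(tile_deg_x, tile_deg_y, gaze_deg_x, gaze_deg_y):
    """Return the fraction of native resolution to render a tile at."""
    ecc = math.hypot(tile_deg_x - gaze_deg_x, tile_deg_y - gaze_deg_y)
    if ecc <= FOVEA_DEG:
        return 1.0    # fovea: full resolution
    if ecc <= MID_DEG:
        return 0.5    # near periphery: half resolution
    return 0.25       # far periphery: quarter resolution

# A tile 3 degrees from the gaze point renders at full resolution:
print(render_scale(3.0, 0.0, 0.0, 0.0))   # 1.0
# A tile 20 degrees away renders at quarter resolution:
print(render_scale(20.0, 0.0, 0.0, 0.0))  # 0.25
```

In a cloud VR pipeline, the same gaze-dependent scale could also drive the encoder, spending transmission bitrate where the user is looking.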
Measuring Visual Fatigue and Cognitive Load via Eye Tracking while Learning with Virtual Reality Head-Mounted Displays: A Review
Published in International Journal of Human–Computer Interaction, 2022
Alexis D. Souchet, Stéphanie Philippe, Domitile Lourdeaux, Laure Leroy
Eye tracking is implemented in some HMDs available on the market (Clay et al., 2019). It can be used to monitor the user’s visual system for interacting in VR (Luro & Sundstedt, 2019), for rendering optimization (NVIDIA foveated rendering) (Patney et al., 2016), and as an assessment tool for a user’s physiological state (Charles & Nixon, 2019; Skaramagkas et al., 2021). Eye tracking is also considered for monitoring players’ behavior in VR (Soler-Dominguez et al., 2017). While learning in VR, eye tracking is considered a psycho-physiological assessment tool that can replace users’ self-reports when investigating the learning experience (Soler et al., 2017). Eye tracking is also considered for monitoring learners’ cognitive state (Sonntag et al., 2015), learning curves (Lallé et al., 2015), cognitive load, and visual fatigue (Abdulin et al., 2016; Abdulin & Komogortsev, 2015; Park & Mun, 2015). According to Y. Wang et al. (2018), eye tracking can replace or complement clinical optometric measures. Using eye tracking emerges as a viable solution for measuring visual fatigue when using an HMD (Abdulin & Komogortsev, 2015).
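As a minimal sketch of how such assessment works in practice, two commonly used proxies for visual fatigue and cognitive load, blink rate and mean blink duration, can be derived from a time-ordered stream of eye-openness samples. The sample format is an assumption for illustration; real HMD eye trackers expose richer signals (pupil diameter, gaze position, confidence):

```python
# Hypothetical sketch: compute blink rate (blinks/min) and mean blink
# duration (s) from (timestamp_s, eye_open) samples. Elevated blink rate
# and longer blinks are often interpreted as fatigue indicators.

def blink_metrics(samples):
    """samples: time-ordered list of (timestamp_s, eye_open: bool)."""
    blinks = []          # (start_s, end_s) of each eyes-closed interval
    closed_since = None
    for t, eye_open in samples:
        if not eye_open and closed_since is None:
            closed_since = t                      # blink starts
        elif eye_open and closed_since is not None:
            blinks.append((closed_since, t))      # blink ends
            closed_since = None
    span_s = samples[-1][0] - samples[0][0]
    rate_per_min = 60.0 * len(blinks) / span_s if span_s > 0 else 0.0
    mean_dur_s = sum(e - s for s, e in blinks) / len(blinks) if blinks else 0.0
    return rate_per_min, mean_dur_s

# Two blinks over a 60 s recording -> 2 blinks/min:
samples = [(0.0, True), (10.0, False), (10.2, True),
           (30.0, False), (30.3, True), (60.0, True)]
print(blink_metrics(samples))  # (2.0, ~0.25)
```

A review-style caveat applies: such raw metrics only become meaningful against a per-user baseline, which is why the cited studies pair them with calibration phases or clinical measures.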
Measuring user preferences and behaviour in a topographic immersive virtual environment (TopoIVE) of 2D and 3D urban topographic data
Published in International Journal of Digital Earth, 2021
Łukasz Halik, Alexander J. Kent
The hardware used in the experiment comprised a laptop computer (MSI GS63VR 7RF series, Core i7-7700HQ CPU, GeForce GTX 1060 6 GB) to run the TopoIVE, with participants using a FOVE 0 tethered head-mounted display (HMD) together with a Bluetooth remote controller. The FOVE 0 was the first HMD to be equipped with eye-tracking technology and foveated rendering, hence its visual display was superior to that offered by other standard tethered HMDs. The controller allowed participants to switch between 2D and 3D modes of representation and to initiate movement with its two-way joystick (forwards-backwards) in the participant’s viewing direction in the HMD. The experiment was conducted indoors with one participant at a time seated at a desk in a controlled laboratory environment, supervised by two researchers. The laptop computer was placed on the desk in front of the participants, allowing the researchers to see the image displayed on the HMD in real time.
Tunnel vision optimization method for VR flood scenes based on Gaussian blur
Published in International Journal of Digital Earth, 2021
Lin Fu, Jun Zhu, Weilian Li, Qing Zhu, Bingli Xu, Yakun Xie, Yunhao Zhang, Ya Hu, Jingtao Lu, Pei Dang, Jigang You
VR technology has been widely used in games, military applications and disaster investigations (Lele 2013; Bhagat, Liou, and Chang 2016; Li et al. 2017; Lu et al. 2020), and there is related research on flood disaster visualization (Massaâbi et al. 2018; Sermet and Ibrahim 2019; Šašinka et al. 2019). VR scenes are characterized by strong immersion, high-fidelity rendering and natural interaction, which place higher demands on the efficiency of scene rendering. VR scene rendering efficiency is generally expressed in frames per second (fps). A frame rate that is too low causes the picture to lag behind the interactive action, inducing motion sickness and degrading the user experience (Billen et al. 2008; Han and Kim 2017; Çöltekin et al. 2020). Generally, VR scene rendering requires a minimum frame rate of 60 fps, and some VR games have higher requirements to achieve a better user experience, so VR scenes need to be optimized (Kamel et al. 2017). The existing optimization methods for VR scenes mainly include model simplification based on mesh vertex reduction (Dassi et al. 2015; Ozaki, Kyota, and Kanai 2015; Liu 2017), graphics processing unit (GPU) programming optimization (Zhang, Wei, and Xu 2020), level of detail (LOD)-based dynamic scheduling (Gao et al. 2016; Hu et al. 2018b), back-face culling, view clipping, and occlusion culling (Chen 2011; Wang 2011; Wu 2013; Geng and Zhu 2014; Lai et al. 2016; Robles-Ortega, Ortega, and Feito 2017), among others. Although certain improvements have been achieved using the above methods, they are all based on reducing the amount of data or the rendering burden such that the full view range of a scene can still be rendered with high fidelity from the perspective of the computer. Foveated rendering is also a common VR scene optimization method, but it easily causes visual artifacts such as flicker and blur (Duchowski and Çöltekin 2007; Bektaş and Çöltekin 2011; Çöltekin et al. 2020).
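The tunnel-vision idea the article's title refers to can be sketched as a post-processing step: keep the region around the gaze (or view center) sharp and blend in a Gaussian-blurred copy toward the periphery. The kernel size, sigma, and radial falloff below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1-D Gaussian kernel of half-width `radius`."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur(image, sigma=2.0):
    """Separable Gaussian blur of a 2-D grayscale image."""
    r = int(3 * sigma)
    k = gaussian_kernel1d(sigma, r)
    padded = np.pad(image, r, mode="edge")
    out = np.apply_along_axis(lambda row: np.convolve(row, k, "same"), 1, padded)
    out = np.apply_along_axis(lambda col: np.convolve(col, k, "same"), 0, out)
    return out[r:-r, r:-r]

def tunnel_vision(image, gaze_rc, sharp_radius):
    """Blend a sharp center and a blurred periphery around gaze_rc=(row, col)."""
    rows, cols = np.indices(image.shape)
    dist = np.hypot(rows - gaze_rc[0], cols - gaze_rc[1])
    # Weight is 0 inside the sharp radius, ramping to 1 over the same distance,
    # so the sharp/blurred transition is gradual rather than a hard edge.
    w = np.clip((dist - sharp_radius) / sharp_radius, 0.0, 1.0)
    return (1 - w) * image + w * blur(image)
```

In a real engine this blend would run as a fragment shader over the rendered frame; a gradual falloff (rather than a hard boundary) is what mitigates the flicker artifacts mentioned above.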