3DTV Technology and Standardization
Published in Hassnaa Moustafa, Sherali Zeadally, Media Networks: Architectures, Applications, and Standards, 2016
With glasses-free systems not yet mature enough for 3DTV, special glasses are needed to select the appropriate view for each eye. Two technologies exist:
■ Active shutter glasses, which are synchronized with a 3DTV set displaying the left and right views of a video alternately. Active glasses require batteries.
■ Passive glasses, which use polarized filters placed on both the screen and the glasses. For example, a current 3DTV can interlace the left and right views in a single image on the screen, while the filters on the glasses allow only the left eye to see the odd lines and only the right eye to see the even lines of the screen. In this case, image resolution is halved compared with active systems, but new approaches such as active retarder attempt to solve this problem.
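The line-interlaced scheme used by passive displays can be sketched as a simple row interleave of the two views. This is an illustrative NumPy sketch, not any vendor's implementation; which line parity reaches which eye depends on the display, so the assignment below is an assumption:

```python
import numpy as np

def interleave_rows(left, right):
    """Combine left/right stereo views into one frame for a
    line-interlaced passive 3D display.

    Assumption (display-specific): rows 0, 2, 4, ... carry the
    left view and rows 1, 3, 5, ... carry the right view.  Each
    eye therefore receives only half the vertical resolution,
    which is the halving mentioned in the text.
    """
    assert left.shape == right.shape, "views must have equal size"
    frame = np.empty_like(left)
    frame[0::2] = left[0::2]    # rows shown to the left eye
    frame[1::2] = right[1::2]   # rows shown to the right eye
    return frame
```

Matching polarizing filters on the glasses then block the opposite-parity rows for each eye, so no temporal synchronization (and no battery) is needed, unlike the active-shutter approach.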
The Effect of Learning in a Virtual Environment on Explicit and Implicit Memory by Applying a Process Dissociation Procedure
Published in International Journal of Human–Computer Interaction, 2019
Alexandra Voinescu, Daniel David
Each of the three experimental conditions had its own learning environment. The first environment corresponded to a classical learning context, as the stimuli were delivered on a computer screen. For this we used an HP Z800 Workstation with a display resolution of 1440 × 900 at a 60 Hz refresh rate. The stimuli were programmed using the OpenSesame software (Version 2.9.4; Mathôt, Schreij, & Theeuwes, 2012). For the second environment, we used the same workstation. The virtual environment was presented on the computer desktop with a 3D view. The virtual scenario consisted of a standard apartment scenario developed by EON Reality Inc. (2014) (http://www2.eonexperience.com/eon-models/main.aspx). The original virtual apartment ran on EON Viewer software (EON Viewer 2011, Version 7.6). The virtual apartment was modified according to the research objectives, and items/objects were added using the EON7 Software Suite, EON Studio software (EON Studio 2010, Version 7.6). The objects had .3Ds or .CAD format. The third virtual environment ran on a CAVE Automatic Virtual Environment with four walls. The EON Icube is produced by EON Reality, Inc. (2010) (http://www2.eonexperience.com/eon-models/main.aspx) and has the following technical specifications: four walls constructed of acrylic screens, four projectors with a display resolution of 1400 × 1050 pixels at a 96 Hz refresh rate and 3000 ANSI lumens luminosity, stereoscopic 3D active shutter glasses, a NaturalPoint tracking system with 12 tracking cameras, a Microsoft Xbox 360 wireless controller for Windows, and a 500 W surround sound system.
The acquisition of survey knowledge for local and global landmark configurations under time pressure
Published in Spatial Cognition & Computation, 2019
Sascha Credé, Tyler Thrash, Christoph Hölscher, Sara Irina Fabrikant
This experiment employed a virtual reality system called the CAVE that simulates binocular vision with the stereoscopic synchronization of active shutter glasses. Participants’ head movements were tracked using infrared emitters attached to the shutter glasses and four optical sensors mounted to the top corners of the display screens. Ultra-short-throw projectors generated images of 1280 × 800 pixels at a 120 Hz refresh rate on three screens. Each screen was 3120 mm wide and 1950 mm tall. These screens were located in front of, to the left of, and to the right of the participant. Figure 1 shows the CAVE setup with a participant sitting on a chair that was 30 cm back from the center of the system. The participant’s viewpoint in the CAVE was offset 60 cm above the position of the shutter glasses. Participants navigated through virtual cities at 3.8 m/s using a wireless one-handed joystick device (i.e., a WorldViz Wand). Physiological measures were recorded with transmitter modules (i.e., BioNomadix) attached to the participant’s wrist for EDA and head for facial EMG. These modules were wirelessly connected to the MP150 stationary acquisition unit (Biopac Systems, Inc., Goleta, CA, USA; https://www.biopac.com) via a local area network. At the participant’s hand and face, 15 cm electrode leads connected each transmitter module to 24 mm disposable hydrogel electrodes (Ag-AgCl sensors). The experimental procedure was written in Python and rendered with Vizard 5.6 (WorldViz, Santa Barbara, CA, USA; https://www.worldviz.com). The city models were designed using CityEngine 2014 (Esri, Redlands, CA, USA; http://www.esri.com/software/cityengine). Physiological data acquisition and fEMG data analysis were conducted using AcqKnowledge 4.4 (Biopac Systems, Inc.). EDA data were analyzed using Ledalab, a MATLAB-based software package for the analysis of skin conductance data (Benedek & Kaernbach, 2010).
Using network data transfer, physiological recordings from AcqKnowledge were synchronized in real time with the experimental procedure in Vizard.
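The general mechanism behind such network-based synchronization can be sketched as the experiment script sending timestamped event markers over the LAN to the acquisition machine. This is a minimal, hypothetical illustration using UDP datagrams; the actual Vizard/AcqKnowledge protocol and message format differ, and the function and field names here are assumptions:

```python
import json
import socket
import time

def send_marker(sock, addr, label):
    """Send a timestamped event marker (e.g., trial onset) so the
    physiological recording can later be aligned with the
    experimental procedure.  The JSON message format here is a
    hypothetical stand-in for the real acquisition protocol."""
    msg = json.dumps({"label": label, "t": time.time()}).encode("utf-8")
    sock.sendto(msg, addr)

# Loopback demonstration: a stand-in "acquisition" socket receives
# the marker that the "experiment" socket sends.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))          # OS picks a free port
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

send_marker(send_sock, recv_sock.getsockname(), "trial_start")
data, _ = recv_sock.recvfrom(4096)
marker = json.loads(data)                  # e.g. {"label": "trial_start", "t": ...}
```

Because both machines sit on the same local network, the marker's arrival time (plus its embedded send timestamp) lets the acquisition software tag the continuous EDA/fEMG streams with experiment events to within a few milliseconds.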