Senses in Action
Published in Haydee M. Cuevas, Jonathan Velázquez, Andrew R. Dattel, Human Factors in Practice, 2017
Lauren Reinerman-Jones, Julian Abich, Grace Teo
Display technology is transitioning from flat, two-dimensional (2D) displays to displays that provide three-dimensional (3D) perspective views, with broad applications in domains such as medicine, the military, education, air-traffic control, and, of course, entertainment. Dynamic, interactive forms of content display have been shown to significantly increase learning performance over static display forms (e.g., a textbook). Through the use of virtual reality, augmented reality, stereoscopic displays, and other technological approaches, 3D environments are readily achieved. The limitation is that such displays usually require some type of viewing device, such as a headset or glasses, to see the 3D content. These devices also usually require some level of power supply, making portable systems less practical or battery life an issue. Consequently, interest in 3D displays with reduced technological dependence has led to a resurgence of printed holographic imaging. Holographic displays offer the advantages of portability, simultaneous viewing by multiple users, and lower cost than physical models. Before these types of displays are implemented, however, their effects on human information processing and performance must be understood.
A mixed methodology for detailed 3D modeling of architectural heritage
Published in Koen Van Balen, Els Verstrynge, Structural Analysis of Historical Constructions: Anamnesis, Diagnosis, Therapy, Controls, 2016
D. Arce, S. Retamozo, R. Aguilar, B. Castañeda
Laser scanning, shape from structured light, shape from silhouette, shape from video, and shape from photometry are among the techniques that have been used in recent years for 3D modeling of archeological heritage (Pavlidis et al. 2007). Each methodology has advantages and disadvantages, and the choice among them is determined by the application, budget, and accuracy required for the project. Nowadays 3D scanners are becoming a standard source for acquiring data in many applications owing to their high precision, but image-based modeling remains the most complete, economical, portable, and flexible approach (Nex et al. 2014). Nevertheless, in real-life applications neither technique alone can produce a complete digitized model, so their capabilities must be combined so that the data acquired from each complements the other. Applications of 3D modeling can be found in several disciplines. In architecture, it is used to generate digital models from which drawings can be produced (Bryan et al. 2016). In virtual applications, 3D models can be used for virtual and augmented reality, holograms, and virtual tours (Carrozzino et al. 2010). In archeology, the models are applied to generate 3D databases with topologic, topographic, geometric, and texture information for heritage documentation.
Literature Survey on Recent Methods for 2D to 3D Video Conversion
Published in Ling Guan, Yifeng He, Sun-Yuan Kung, Multimedia Image and Video Processing, 2012
Raymond Phan, Richard Rzeszutek, Dimitrios Androutsos
Presenting these two views of a scene to a human observer, one slightly offset from the other, generates the illusion of depth, so the image is perceived as 3D. The work is done primarily by the visual cortex in the brain, which fuses the left and right views presented to the respective eyes. The disparities between corresponding points in the two viewpoints are collected into one coherent result, commonly known as a disparity map, which is the same size as either the left or the right image. The map is monochromatic (grayscale), and the brightness of each point is directly proportional to the disparity between the corresponding points: a lighter point has a higher disparity and is closer to the viewer, while a darker point has a lower disparity and is farther from the viewer. For scenes that have smaller disparities overall, the disparity maps are scaled accordingly, so that white always corresponds to the highest disparity.
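The brightness convention described above can be sketched in a few lines. The following is a minimal illustration, not drawn from the surveyed work, assuming the disparity map is given as a list of rows of per-pixel disparities; it rescales the map so that the largest disparity in the scene maps to white (255), matching the normalization the paragraph describes.

```python
def disparity_to_grayscale(disparity):
    """Rescale a disparity map to an 8-bit grayscale image.

    Brighter pixels mark larger disparities (points closer to the
    viewer); the scene's maximum disparity maps to white (255), so
    scenes with small disparities overall are stretched accordingly.
    """
    d_max = max(max(row) for row in disparity)
    if d_max == 0:
        # Degenerate scene with no disparity anywhere: all black.
        return [[0 for _ in row] for row in disparity]
    return [[round(255 * d / d_max) for d in row] for row in disparity]

# A tiny 2x3 disparity map whose largest disparity is 8 pixels:
gray = disparity_to_grayscale([[0, 4, 8], [2, 6, 8]])
# → [[0, 128, 255], [64, 191, 255]]
```

Real stereo pipelines produce such maps with block- or semi-global matching; the rescaling step here only fixes the display convention, not the correspondence search itself.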
Effect of Transparency Levels and Real-World Backgrounds on the User Interface in Augmented Reality Environments
Published in International Journal of Human–Computer Interaction, 2023
Muhammad Hussain, Jaehyun Park
Display technologies have progressed from the large cathode ray tube to small flat-panel designs, such as the liquid crystal display (LCD) and the organic light-emitting diode (OLED), since the start of this millennium (Chen et al., 2017). More recently, the next-generation display technologies under development go beyond flat panels placed in front of users and instead aim to revolutionize how people interact with their surroundings (Cakmakci & Rolland, 2006). One such display is augmented reality, which strives for high-quality see-through performance and enhances the real world by superimposing digital content on it. As the next generation of interactive displays, augmented reality (AR) and virtual reality (VR) headsets can deliver rich three-dimensional (3D) visual experiences (Cakmakci & Rolland, 2006; Yin et al., 2021; Zhan et al., 2020). Augmented reality promises to revolutionize human–computer interaction. Both AR and VR displays can enable appealing applications in health care, engineering design, education, manufacturing, retail, and entertainment, thanks to their cutting-edge optical technology and novel user experiences (Caudell & Mizell, 1992; Munsinger et al., 2019; Rendon et al., 2012; Wu et al., 2013). AR has also been a potent tool for enhancing process flexibility and efficiency in industrial operations (Fraga-Lamas et al., 2018).
Can 3-Dimensional Visualization Enhance Mental Rotation (MR) Ability?: A Systematic Review
Published in International Journal of Human–Computer Interaction, 2023
The phrases “three-dimensional” and “3-D” are used loosely in the literature. To illustrate, systems that display figures representing isometric views of objects with a “3D” effect in a 2-D environment are often described as “3D.” Therefore, the categorizations of Hackett and Proctor (2016) informed this review. They state that, although they vary in their ability to replicate human perception, several display types (monoscopic 3-D, stereoscopic 3-D, augmented and mixed reality, and autostereoscopic displays) are able to present three-dimensional content. Monoscopic 3-D displays include conventional computer screens, whereas stereoscopic 3-D display is achieved with 3-D glasses. While AR and MR displays take various mobile and HMD-based forms, autostereoscopic displays present holographic content without the aid of glasses or HMDs. This review included only research on truly “3-D” environments that employ 3-D models that can be viewed from all angles and thus allow 3-D spatial perception. In this respect, the abstracts of the selected works were first screened for dimensionality; for the remaining works, the dimensionality of the training environment was inferred from the body of the article, and inclusion decisions were made accordingly. Although they offer a lesser degree of immersion, computer-based interventions (i.e., monoscopic 3-D displays) that employ 3-D visualization were also included in the review.
A novel teaching and training system for industrial applications based on augmented reality
Published in Journal of the Chinese Institute of Engineers, 2020
Kuu-Young Young, Shu-Ling Cheng, Chun-Hsu Ko, Yu-Hsuan Su, Qiao-Fei Liu
Among research on teaching and training system design, the methods of lead-through (Karwowski and Rahimi 1992; Choi et al. 2013) and human demonstration (Tang et al. 2016) have been proposed for task teaching, leading to more intuitive teaching and training. Virtual reality (VR) and augmented reality (AR) techniques have also been introduced to give the user a sense of visual immersion (Hashimoto et al. 2013; Fang, Ong, and Nee 2014; Frank, Moorhead, and Kapila 2016; Malý, Sedláček, and Leitão 2016; Su and Young 2018; Whitney et al. 2018). Interfaces using AR and mixed reality have also been designed for communicating robot motion intent (Walker et al. 2018; Rosen et al. 2019). Meanwhile, systems involving 3D VR and AR, accompanied by 3D display devices such as head-mounted displays, have also been proposed (Rodehutskors, Schwarz, and Behnke 2015; Krichenbauer et al. 2018; Su et al. 2019). Force feedback (Gibo, Bastian, and Okamura 2014) and physics-based simulation (Kaigom and Roßmann 2016) have been used to yield realistic emulation. Additionally, touchscreens have been employed to allow the user to interact with an actuated object through multi-touch gestures (Kasahara et al. 2013). Furthermore, interfaces based on biological signals, such as EMG (Liu and Young 2012), have also been developed.