Freeform Optics/Nonimaging: An Introduction
Published in Tuan Anh Nguyen, Ram K. Gupta, Nanotechnology for Light Pollution Reduction, 2023
Nonimaging optics is built on two design principles: concentration of solar energy (maximizing the energy delivered to the receiver, usually a solar cell or thermal receiver) and illumination (controlling the distribution of light so that target areas are lit while other areas remain completely dark) [28,29]. Illumination-oriented nonimaging design plays a major role in reducing light pollution [30]. Nonimaging optical systems are common across lighting engineering; current designs include automotive headlights, LCDs, illuminated instrument panel displays, backlights, LED lights, optical fiber lighting devices, projection display systems, and luminaires [31–38]. Lighting applications in particular focus on the intensity distribution. Nonimaging optics provides precise control of light, delivering high-quality, uniform illumination for LED applications. Designing an optical system that uses LEDs as a high-intensity light source for projectors is more difficult than using a compact arc lamp; high-brightness LED projection systems have improved light-engine efficiency by using nonimaging components matched to the shape and emission pattern of the LED [39]. Freeform optics has been used to increase the angular color uniformity of a white LED and to obtain uniform illumination over a large angular range. LEDs are now also used in street-lighting equipment to reduce light pollution: freeform lenses are optimized to produce a controlled luminance distribution on the street and to enhance the uniformity of the illuminated surface [40]. Freeform surfaces reduce light pollution by shielding the light source to minimize glare and light trespass, and by confining the light to the target area to reduce skyglow.
Haptic Interface
Published in Julie A. Jacko, The Human–Computer Interaction Handbook, 2012
The second reason why some people fail to perceive the sensation is related to a combination of visual and haptic displays. A visual image is usually combined with a haptic interface by using a conventional cathode-ray tube or projection screen. Thus, the user receives visual and haptic sensations through different displays and has to integrate the visual and haptic images in his or her brain. Some users, especially elderly people, face difficulty in this integration process.
Perceptual Attention to Contact Analogue Head-Up Displays
Published in Y. Ian Noy, Ergonomics and Safety of Intelligent Driver Interfaces, 2020
For each pair of HUD or screen primary-task blocks, one block of the pair was presented with the HUD image on a focal plane coplanar with the projection screen. For the other block, the HUD was focused at an intermediate distance of 2.5 m from the subject.
M-AR: A Visual Representation of Manual Operation Precision in AR Assembly
Published in International Journal of Human–Computer Interaction, 2021
Zhuo Wang, Xiaoliang Bai, Shusheng Zhang, Weiping He, Yang Wang, Dechuan Han, Sili Wei, Bingzhao Wei, Chengkun Chen
The prototype system combines three elements: (1) a server; (2) a client; (3) a data communication module. As shown in Figure 9, the server is the core repository of visual data and assembly instructions; the client accesses the server and obtains the corresponding resource data. Our team uses the Intel NUC7i7BNH microcomputer as the hardware platform for assembly resources. It uses an Intel Core i7-7567U at 3.5 GHz, 6 GB RAM, an Intel GMA HD 650 graphics card, 32 GB DDR4 at 2133 MHz, and the Windows 10 Professional 64-bit operating system. The client connects a projector and an industrial camera to a PC and uses the projected content to guide the user's operation. Our team chose the Dell Alienware 17 (alw17c-d2758) laptop as the client. It uses an Intel Core i7-7700HQ at 2.8 GHz, 8 GB RAM, an NVIDIA GeForce GTX 1070 graphics card, 16 GB DDR4 at 2667 MHz, and the Windows 10 Professional 64-bit operating system. Our team chose an industrial camera with an 8–50 mm zoom lens: 5-megapixel resolution, maximum frame rate of 60 fps, USB 3.0 video interface, and a horizontal viewing angle of 6.3–37.5 degrees. Our team chose the VPL-DX271 projector: projection screen size of 40–300 inches, brightness of 3600 lumens, standard resolution of 1600 × 1200, and 3LCD display technology. A client–server ("C-S") platform is used to transfer resources between the server and the client. Under this platform, the server sends visual resources and assembly instructions to the client over WiFi, and the client integrates the received resources to guide the user's micro-operation. The user's operation behavior is recorded by the industrial camera and fed back to the server, which performs statistical analysis on the data.
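The "C-S" transfer loop above (server pushes a resource, client receives and acts on it) can be sketched with plain sockets. This is a minimal illustration under stated assumptions, not the paper's implementation: the port, JSON message format, and function names (`serve_once`, `fetch_instruction`) are all hypothetical, and the real system moves larger visual resources over WiFi.

```python
# Hypothetical sketch of one server-to-client instruction transfer.
# Port, payload format, and names are assumptions for illustration.
import json
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050  # assumed address; the real system uses WiFi

def serve_once(instruction: dict) -> None:
    """Server: accept one client and send a single assembly instruction."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.sendall(json.dumps(instruction).encode())

def fetch_instruction() -> dict:
    """Client: connect, read the full payload, and decode it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        chunks = []
        while data := cli.recv(4096):  # server closing ends the loop
            chunks.append(data)
    return json.loads(b"".join(chunks))

if __name__ == "__main__":
    step = {"step": 1, "part": "bolt-M6", "action": "insert"}  # made-up payload
    t = threading.Thread(target=serve_once, args=(step,))
    t.start()
    time.sleep(0.2)  # give the server time to start listening
    received = fetch_instruction()
    t.join()
    print(received["part"])
```

In the described system this exchange would repeat per assembly step, with the camera feedback travelling back over the same kind of channel.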
The development of perceptual-cognitive skills in youth volleyball players
Published in Journal of Sports Sciences, 2021
Silke De Waelle, Griet Warlop, Matthieu Lenoir, Simon J. Bennett, Frederik J.A. Deconinck
The video clips were back-projected onto a 1.07 m (w) × 0.6 m (l) projection screen using an LED video projector (LG PH550G, Seoul, South Korea) with HD resolution. The projector was placed on a table 1.5 m from the screen, while the subjects were positioned behind the table, 2.00 m from the screen (see Figure 6). To facilitate immersion in the displayed volleyball game, participants stood for the anticipation and decision-making tests; to enable easy writing, they were seated at the table for the pattern recall test. Participants' responses for the anticipation and decision-making tests were recorded using a standard Dell keyboard with a wired USB connection. OpenSesame software was used to display the videos and record the responses (Mathôt et al., 2012). This software is designed specifically for behavioural experiments and allows efficient stimulus presentation with sub-millisecond timing. All standard keyboards are subject to timing lag: Damian (2010) reported that the average lag introduced by keyboards in scientific experiments is around 30 ms, and concluded that the variation in human performance is considerably larger than any variation due to response-device imprecision. Software packages can also introduce lag, but Bridges et al. (2020) showed that timing lag in OpenSesame is minimal for visual-stimulus onset and response times (3.85 ± 0.7 ms for visual onset and 8.27 ± 1.22 ms for response-time measurement).
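The lag figures above matter because a reaction time is computed as keypress time minus stimulus-onset time, and any constant device lag inflates that difference. The following sketch illustrates the bookkeeping generically; it is not OpenSesame's internal code, the function name is hypothetical, and the 30 ms correction is simply the mean keyboard lag reported by Damian (2010).

```python
# Generic response-time bookkeeping with an optional mean-lag correction.
# Illustrative only; OpenSesame handles timing internally, and the
# 0.030 s figure is the average keyboard lag from Damian (2010).
import time

KEYBOARD_LAG_S = 0.030  # assumed mean keyboard lag

def response_time(stimulus_onset: float, key_event: float,
                  correct_for_lag: bool = True) -> float:
    """Raw RT is key-event time minus stimulus onset; optionally
    subtract the mean keyboard lag to estimate the true RT."""
    rt = key_event - stimulus_onset
    return rt - KEYBOARD_LAG_S if correct_for_lag else rt

# Simulated trial: stimulus onset now, keypress registered 0.500 s later.
onset = time.perf_counter()
press = onset + 0.500
print(round(response_time(onset, press), 3))  # 0.47
```

Because the correction is a constant, it shifts every trial equally and leaves between-condition differences untouched, which is why Damian (2010) could conclude that device imprecision is negligible next to human variability.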