Human Perception of Sensor-Fused Imagery
Published in Robert R. Hoffman, Arthur B. Markman, Interpreting Remote Sensing Imagery, 2019
Edward A. Essock, Jason S. McCarley, Michael J. Sinai, J. Kevin DeFord
In this chapter we focus on one type of nonliteral imagery: nighttime imagery obtained by electronic sensors. Typically, these sensors are either infrared detectors or image intensifiers. (The latter sensor, typified by traditional night vision goggles, serves to electronically amplify the effect of the few photons in the nighttime outdoor environment.) In our review, we focus on imagery produced by combining or fusing imagery from multiple electronic sensors, a procedure performed in an attempt to capitalize on the advantages of each type of sensor. We present an overview of the research findings to date that bear on the question of the extent to which the sensors and fusion methods effectively convey spatial information that humans can perceptually organize and use functionally. Our goals in this chapter are to: (1) explain the need for behavioral assessment of perceptual performance with artificial imagery; (2) summarize the findings of performance assessment to date with nighttime imagery; and (3) emphasize the need for much more research that draws upon the expertise of sensor engineers and scientists, on the one hand, and human factors psychologists and psychophysicists, on the other.
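As a minimal illustration of what "combining or fusing imagery" can mean at the pixel level (the arrays and fusion rules below are hypothetical stand-ins, not the methods reviewed in the chapter), two co-registered sensor frames might be fused as follows:

```python
import numpy as np

# Hypothetical sketch: two of the simplest pixel-level fusion rules for
# co-registered frames from two sensors. These are illustrative stand-ins,
# not the fusion algorithms evaluated in the chapter.
rng = np.random.default_rng(0)
intensified = rng.random((4, 4))     # stand-in for an image-intensifier frame
thermal = rng.random((4, 4))         # stand-in for an infrared frame

fused_avg = 0.5 * (intensified + thermal)      # retains both sources, lowers contrast
fused_max = np.maximum(intensified, thermal)   # keeps the brighter sensor per pixel
```

Either rule yields a single frame for display; which one better conveys usable spatial information to a human observer is exactly the kind of question behavioral assessment must answer.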
Digital Image Processing and Visual Perception
Published in Scott E. Umbaugh, Digital Image Processing and Analysis, 2017
Since the HVS sees in a manner that depends on its sensors, most cameras and display devices are designed with sensors that mimic the HVS response. With this type of design, metamers, which look the same to us, will also appear the same to the camera. However, by proper specification of the sensors' response functions, we can design a camera that distinguishes between metamers, something that our visual system cannot do. This is one example illustrating that a computer vision system can be designed with capabilities beyond the HVS; we can even design a machine vision system to “see” x-rays or gamma rays. We can also design a system to allow us to see in the infrared band, which is what night vision goggles do. The strength of the HVS lies not with the sensors, but with the intricate complexity of the human brain. Even though some aspects of manufactured vision systems, for example, cameras, may exceed some of the capabilities of our visual system, it will be some time before we have developed a machine vision system to rival the HVS.
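The role of the response function can be made concrete. In the sketch below (the sensitivity curves are rough Gaussian stand-ins, not measured cone data), a sensor's response is modeled as the dot product of a light spectrum with the sensor's spectral sensitivity. Two spectra that differ only by a component in the null space of the HVS sensitivity matrix are metamers to the HVS, while a sensor with a suitably different response function separates them:

```python
import numpy as np

# Minimal sketch (Gaussian stand-ins, not measured cone data): a sensor's
# response to a light spectrum is the dot product of the spectrum with the
# sensor's spectral sensitivity, sampled here at 10 nm steps.
wavelengths = np.linspace(400, 700, 31)

def sensitivity(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Three HVS-like sensitivities standing in for the S, M, and L cones.
hvs = np.stack([sensitivity(440, 25), sensitivity(545, 35), sensitivity(565, 35)])

# Build a metameric pair: add a small component from the null space of the
# HVS sensitivity matrix, so both spectra project identically onto the cones.
spectrum_a = sensitivity(500, 60) + 0.2     # smooth, strictly positive spectrum
null_vec = np.linalg.svd(hvs)[2][3]         # direction orthogonal to all three rows
spectrum_b = spectrum_a + 0.1 * null_vec    # stays positive; same cone responses

assert np.allclose(hvs @ spectrum_a, hvs @ spectrum_b)   # metamers to the HVS

# A sensor whose response function overlaps the added component separates them.
extra_sensor = np.clip(null_vec, 0.0, None)   # nonnegative sensitivity curve
print(extra_sensor @ spectrum_a, extra_sensor @ spectrum_b)   # responses differ
```

The design choice is exactly the one described in the text: the camera that mimics the HVS (the three Gaussian channels) cannot tell the pair apart, while the camera with the extra, deliberately non-HVS-like channel can.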
Detection Technology
Published in Rick Houghton, William Bennett, Emergency Characterization of Unknown Materials, 2020
Rick Houghton, William Bennett
Infrared radiation (IR) is energy that has a longer wavelength, a lower frequency, and less energy than visible light. The infrared radiation emitted by an object increases with its temperature. Night vision goggles work by converting heat energy (infrared radiation) into visible light displayed to the viewer. A firefighter’s thermographic camera functions similarly by depicting infrared intensity. Both devices detect the infrared radiation emitted by an object.
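The wavelength-energy relationship stated above follows from E = hc/λ. A quick numerical check (the band choices below are illustrative) shows that a long-wave infrared photon carries far less energy than a visible-light photon:

```python
# Photon energy falls as wavelength grows (E = h*c / wavelength), so an
# infrared photon carries less energy than a visible-light photon.
h = 6.626e-34       # Planck constant, J*s
c = 2.998e8         # speed of light, m/s
eV = 1.602e-19      # joules per electron-volt

def photon_energy_ev(wavelength_m):
    return h * c / wavelength_m / eV

visible = photon_energy_ev(550e-9)    # green light
infrared = photon_energy_ev(10e-6)    # long-wave IR, the thermal-imaging band
print(visible, infrared)              # about 2.25 eV vs about 0.12 eV
assert infrared < visible
```

The roughly twenty-fold gap in photon energy is why thermal-band detectors need a different sensing technology than visible-light cameras.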
The Role of bi-Directional Graphic Communication in Human-Unmanned Operations
Published in International Journal of Human–Computer Interaction, 2022
Tal Oron-Gilad, Ilit Oppenheim, Yisrael Parmet
Following the keep-it-simple design principle, a picture provides less data than a video; one would therefore expect it to be preferable to video whenever the video offers little or no utility beyond a still image. Baber et al. (2010) examined whether technologies such as imaging devices support decision-making. They found that providing reconnaissance patrols with cameras can supplement other technologies, such as night-vision goggles or binoculars, and allows intelligence to be captured through location-based photography in real time. Their results showed little difference in the time spent patrolling when location-based photography was introduced, but a significant improvement in reporting. Tagged images changed the nature of the reporting and allowed flexibility in interpretation and analysis (Baber et al., 2010).
Role of age and injury mechanism on cervical spine injury tolerance from head contact loading
Published in Traffic Injury Prevention, 2018
Narayan Yoganandan, Sajal Chirvi, Liming Voo, Frank A. Pintar, Anjishnu Banerjee
Like the upright model results, the inverted drop test series (group B tests) from the earlier study indicated that the age of the specimen is a significant covariate. However, it was negatively associated with the risk of injury; that is, force increased with advancing age. This outcome appears counterintuitive. The resulting injuries and their associated mechanisms may have contributed to the difference in the statistical outcomes. A comparison of the forces at all risk levels indicated a greater magnitude of force with the upright than with the inverted model, reflecting the bony injuries in the former versus the soft tissue injuries, distractive in nature, sustained in the latter setup. Because the bending mode may be involved in extension injury mechanisms in the earlier and later inverted experimental models, it is important to consider the moment, which may be a better candidate, or an interactive component in association with the axial force, for injury criteria and tolerance estimations. This may also have applications in the military, because operational activities involve the bending mode due to head-supported mass (helmet, night vision goggles, etc.) worn on the human head. Because these populations are healthier and younger, and greater ranges of segmental angular motion are representative of this group, the moment variable may play an important role. Thus, future studies should focus on extracting this variable from impact tests. These are topics for additional research.