Introduction to Computer Vision and Basic Concepts of Image Formation
Published in Manas Kamal Bhuyan, Computer Vision and Image Processing, 2019
This chapter gives an overview of the fundamental concepts of image formation in a single-camera setup. For image acquisition, a real-world scene is projected onto a two-dimensional image plane. Projecting a three-dimensional scene onto a two-dimensional image loses the depth information of the scene; the original three-dimensional scene can be reconstructed using disparity information. To address this point, the geometry of image formation in a stereo vision setup is described in this chapter. In a stereo vision setup, depth information of a scene can be obtained by estimating a disparity map from the two stereo images. The major challenges in obtaining an accurate disparity map are also briefly discussed. Finally, the concept of image reconstruction by the Radon transform is discussed.
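As a concrete illustration of the disparity-to-depth relation this chapter builds on, the following minimal Python sketch applies the standard rectified-stereo formula Z = fB/d (focal length f in pixels, camera baseline B, disparity d in pixels). The function name and the numbers are illustrative choices, not taken from the book.

```python
import numpy as np

def depth_from_disparity(disparity, focal_length_px, baseline_m):
    """Recover per-pixel depth from a rectified-stereo disparity map.

    Uses the standard relation Z = f * B / d, where f is the focal
    length in pixels, B the baseline between the two cameras in
    metres, and d the disparity in pixels.
    """
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0  # zero disparity: infinitely far or unmatched
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Example: a toy 2x2 disparity map with f = 700 px and B = 0.12 m.
d = np.array([[35.0, 70.0], [14.0, 0.0]])
print(depth_from_disparity(d, focal_length_px=700.0, baseline_m=0.12))
# Larger disparity -> smaller depth; d = 0 maps to infinity.
```

The inverse relationship between disparity and depth is exactly why errors in the estimated disparity map translate directly into depth errors, and why the accuracy challenges mentioned above matter.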
Rules of Thumb
Published in Tamara Munzner, Visualization Analysis and Design, 2014
not surprising when considered mathematically, because, as shown in Figure 6.2, the length of a line that extends into the scene is scaled nonlinearly in depth, whereas a line that traverses the picture plane horizontally or vertically is scaled linearly, so distances and angles are distorted [St. John et al. 01]. Considered perceptually, the inaccuracy of depth judgements is also not surprising; the common intuition that we experience the world in 3D is misleading. We do not really live in 3D, or even 2.5D: to quote Colin Ware, we see in 2.05D [Ware 08]. That is, most of the visual information that we have is about a two-dimensional image plane, as defined below, whereas the information that we have about a third depth dimension is only a tiny additional fraction beyond it. The value 0.05 is chosen somewhat arbitrarily to represent this tiny fraction.

Consider what we see when we look out at the world along a ray from some fixed viewpoint, as in Figure 6.2(a). There is a major difference between the toward-away depth axis and the other two axes, sideways and up-down. There are millions of rays that we can see along these two axes by simply moving our eyes, to get information about the nearest opaque object. This information is like a two-dimensional picture, often called the image plane. In contrast, we can only get information at one point along the depth axis for each ray away from us toward the world, as in Figure 6.2(b). This phenomenon is called line-of-sight ambiguity [St. John et al. 01]. In order to get more information about what is hidden behind the closest objects shown in the image plane, we would need to move our viewpoint or the objects in the scene.
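The mathematical point about nonlinear versus linear scaling can be checked in a few lines of code. The Python sketch below (all names and numbers are illustrative, not from the book) projects points through an idealized pinhole with unit focal length: a segment lying in a fronto-parallel plane projects with length proportional to its world length, while an equal-length segment pushed along the depth axis shrinks roughly as 1/z².

```python
import numpy as np

def project(point, f=1.0):
    """Pinhole projection of a 3D point (x, y, z) onto the image plane."""
    x, y, z = point
    return np.array([f * x / z, f * y / z])

# A segment in a fronto-parallel plane (constant z): its projected
# length is proportional to its world length -- linear scaling.
a = project((0.0, 0.0, 5.0))
b = project((2.0, 0.0, 5.0))
print(np.linalg.norm(b - a))  # 0.4; doubling the world length doubles this

# A segment extending along the depth axis: each endpoint is scaled by
# 1/z, so equal world lengths project to unequal image lengths.
for z0 in (2.0, 4.0, 8.0):
    p = project((1.0, 0.0, z0))
    q = project((1.0, 0.0, z0 + 2.0))  # same 2-unit world length each time
    print(z0, np.linalg.norm(q - p))   # 0.25, 0.0833, 0.025: nonlinear shrink
```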
Nonlinear analysis of interferometric moiré fringes
Published in C A Walker, Handbook of Moiré Measurement, 2003
Consider the interferometric moiré configuration shown in Figure 2.1.1. This figure shows a pair of laser beams diffracting from a grating on the specimen surface. To measure all three surface strain components, two pairs of laser beams are needed: one in the y–z plane, as shown, and one in the x–z plane, which diffract from a pair of crossed gratings on the specimen surface. In general, the two pairs of beams need only lie in different planes, but a 90° angle between the planes is optimal. The figure also shows the diffracted beams, with diffraction orders +1 and −1, imaged with a lens system (shown as a single lens for simplicity) onto an image plane. The image plane can be either photographic film or an electronic sensing device such as a CCD video camera.
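For readers who want the fringe-to-strain arithmetic behind this configuration, the sketch below applies the standard moiré-interferometry relation for the ±1 diffraction orders: the interfering beams act as a virtual reference grating of twice the specimen grating frequency f, so a fringe order N corresponds to an in-plane displacement u = N/(2f), and normal strain follows as the spatial derivative of u. The grating frequency, pixel pitch, and fringe field below are illustrative values, not parameters from this chapter.

```python
import numpy as np

def displacement_from_fringes(fringe_order, grating_freq_per_mm):
    """Convert a fringe-order map N to in-plane displacement (mm).

    With the +1/-1 diffraction orders interfering, the virtual
    reference grating has twice the specimen grating frequency f,
    so each fringe corresponds to 1/(2f) of displacement: u = N/(2f).
    """
    return fringe_order / (2.0 * grating_freq_per_mm)

def normal_strain(fringe_order, grating_freq_per_mm, pixel_pitch_mm, axis=1):
    """Estimate normal strain as the spatial derivative of displacement."""
    u = displacement_from_fringes(fringe_order, grating_freq_per_mm)
    return np.gradient(u, pixel_pitch_mm, axis=axis)

# Example: 10 fringes spread linearly across a 5 mm field, 1200 lines/mm grating.
nx = np.tile(np.linspace(0.0, 10.0, 101), (5, 1))
eps_x = normal_strain(nx, grating_freq_per_mm=1200.0, pixel_pitch_mm=0.05)
print(eps_x.mean())  # ~8.3e-4, i.e. about 830 microstrain
```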
Depth-enhanced 2D/3D switchable integral imaging display by using n-layer focusing control units
Published in Liquid Crystals, 2022
Qiang Li, Fei-Yan Zhong, Huan Deng, Wei He
Light rays emitted from the displayed pixels are focused by a lens array onto an image plane called the central depth plane (CDP). Far from the CDP, the 3D image points gradually spread into a diffused spot, resulting in gradual blurring of the 3D image. Owing to the limited resolution of the human eye, a defocused 3D image within a certain range near the CDP is still perceived as clear; this range is defined as the DOF of the integral imaging system. The position of the CDP depends on the focal length f of the lens array and the gap g between the 2D display screen and the lens array. Since f and g are fixed for a given 3D display device, the DOF of an integral imaging system is usually very narrow.

There are several ways to increase the DOF and break this limitation. One way is to change the gap g. Early methods required mechanically moving the lens array to change the position of the CDP [16], but when mechanical movements are involved, the elemental image array and the lens array cannot match perfectly, degrading image quality. Methods presenting two or three CDPs using a stepped lens array [17], a two-layer lens array [18], or a uniaxial crystal plate [19] have been reported, which can also increase the DOF to a certain extent. The other way is to change the focal length f: for example, interleaving lenses with different focal lengths in the lens array [20], using a single lens with multiple focal lengths [21], or recording two sets of lens arrays into a holographic optical element [22]. However, the improvement offered by these methods is still limited. Recently, tunable optical elements such as the liquid lens [23,24] and the electronically controlled liquid crystal lens [25], which can continuously adjust the position of the CDP, have been incorporated into integral imaging systems, but deficiencies remain in optical resolution and dynamic refresh rate. In addition, spatial multiplexing methods using a beam splitter [26] or dihedral mirrors [27] can also improve the DOF effectively, but they increase the system volume dramatically. Moreover, none of these methods is suitable for 2D/3D switchable displays.
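As a hedged numerical sketch of why the CDP position is so sensitive to f and g: assuming the CDP is simply the image of the display pixels under the Gaussian lens equation 1/g + 1/l = 1/f (a common idealization, not a formula quoted from this paper), the CDP sits at l = fg/(g − f) in front of the lens array. The function name and values below are illustrative.

```python
def cdp_distance(f_mm, g_mm):
    """Central-depth-plane distance from the lens array (mm).

    From the Gaussian lens equation 1/g + 1/l = 1/f, the display
    pixels at gap g are imaged to l = f*g / (g - f) in front of the
    lens array (real-image mode, which requires g > f).
    """
    if g_mm <= f_mm:
        raise ValueError("real-image mode requires g > f")
    return f_mm * g_mm / (g_mm - f_mm)

# Example with f = 10 mm lenses: small changes in the gap g shift the CDP
# a long way, which is why tunable-gap and tunable-f designs can move the
# CDP (and hence extend the usable DOF) so effectively.
for g in (10.5, 11.0, 12.0):
    print(g, cdp_distance(10.0, g))
# g = 10.5 -> 210 mm, g = 11.0 -> 110 mm, g = 12.0 -> 60 mm
```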