Ca2+ Imaging
Published in Francesco S. Pavone, Shy Shoham, Handbook of Neurophotonics, 2020
Tobias Nöbauer, Alipasha Vaziri
First, a microscope is object-space telecentric: a stop in a plane conjugate to the back focal plane of the objective – usually physically located inside the microscope objective – restricts the angular spread of accepted rays to a well-defined range that is independent of lateral position in the field of view (FOV) and symmetric about the optical axis. A microscope therefore produces orthographic views of the scene rather than the perspective views formed by a standard, non-telecentric photographic camera. If the microscope uses a so-called infinity-corrected objective and forms the image using a tube lens (as is the case in practically all modern standard microscopes), it is also image-space telecentric. In effect, the sensor pixels with the same position relative to the nearest microlens (that is, the same (u,v) indices) all capture the same range of ray angles emanating from object space, irrespective of the microlens coordinate (s,t), which is not the case in the light-field camera configuration discussed above.
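This fixed correspondence between the (u,v) index and the ray angle means that extracting a single-angle (sub-aperture) view from a telecentric light-field microscope is a pure reindexing of the sensor data. The following minimal sketch illustrates this, assuming a hypothetical raw sensor layout in which each microlens covers an n_v × n_u block of pixels; the function name and layout convention are assumptions for the example, not taken from the chapter.

```python
import numpy as np

def extract_subaperture_views(raw, n_u, n_v):
    """Rearrange a raw light-field sensor image into sub-aperture views.

    raw      : 2D array of shape (n_t * n_v, n_s * n_u), i.e. a sensor
               image in which each microlens covers an (n_v, n_u) block.
    n_u, n_v : number of pixels per microlens along x and y.

    Returns an array of shape (n_v, n_u, n_t, n_s): views[v, u] is the
    image formed by taking pixel (u, v) under every microlens (s, t).
    In a telecentric light-field microscope, each such view is an
    orthographic projection of the sample at one fixed ray angle.
    """
    n_t = raw.shape[0] // n_v
    n_s = raw.shape[1] // n_u
    lf = raw.reshape(n_t, n_v, n_s, n_u)   # axes: (t, v, s, u)
    return lf.transpose(1, 3, 0, 2)        # axes: (v, u, t, s)

# Example: 15x15 pixels per microlens, 100x120 microlenses
raw = np.random.rand(100 * 15, 120 * 15)
views = extract_subaperture_views(raw, n_u=15, n_v=15)
on_axis = views[7, 7]  # central view: rays parallel to the optical axis
```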
Introduction to Hyperspectral Satellites
Published in Shen-En Qian, Hyperspectral Satellites and System Design, 2020
Figure 1.19c shows another implementation using a light field camera with a filter array. A lenslet array is placed at the focal plane of the objective lens, and the detector array lies at a pupil plane as imaged by the lenslets. The image behind each lenslet is thus an image of the filter array, modulated by the scene's average spectral distribution over that lenslet. This implementation has the advantage that a variety of objective lenses can be used, so that zooming, refocusing, and changing focal lengths are easier to achieve, at the cost of being more complex and less compact than the other two implementations (Levoy et al. 2006, Horstmeyer et al. 2009).
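Because pixel (u,v) behind every lenslet looks through the same spectral filter, reconstructing the hyperspectral cube from such a measurement is, in the idealized case, the same reindexing operation as sub-aperture extraction, with the angular axes reinterpreted as a filter index. A sketch under those idealized assumptions (hypothetical layout, no filter crosstalk, perfect lenslet alignment):

```python
import numpy as np

def demultiplex_spectral_cube(raw, n_u, n_v):
    """Rebuild a hyperspectral cube from a pupil-plane filter-array light field.

    Assumes each lenslet covers an (n_v, n_u) pixel block imaging the
    filter array in the pupil, so pixel (u, v) under every lenslet saw
    the scene through the same spectral filter.

    raw : 2D sensor image of shape (n_t * n_v, n_s * n_u).
    Returns a cube of shape (n_v * n_u, n_t, n_s): one band image per
    filter, at the spatial resolution of the lenslet array.
    """
    n_t = raw.shape[0] // n_v
    n_s = raw.shape[1] // n_u
    lf = raw.reshape(n_t, n_v, n_s, n_u).transpose(1, 3, 0, 2)  # (v, u, t, s)
    return lf.reshape(n_v * n_u, n_t, n_s)
```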
Hologram calculation
Published in Tomoyoshi Shimobaba, Tomoyoshi Ito, Computer Holography, 2019
Tomoyoshi Shimobaba, Tomoyoshi Ito
Methods have been proposed that reproduce deep 3D scenes by combining the multi-view image approach with other techniques. For example, Refs. [78, 79] combine a set of sub-images with depth images and generate the hologram using the point cloud method, without using the FFT. In addition, Refs. [80, 81] combine the RGB-D image approach described in the next section with the multi-view image approach to create a hologram. Using integral photography, a 3D scene captured live can be converted into a hologram. Other methods using a light field camera have also been proposed [82, 83].
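For reference, the point cloud method mentioned above superposes a spherical wave from every object point: u(x, y) = Σ_j a_j exp(i 2π r_j / λ), where r_j is the distance from point j to hologram pixel (x, y). The sketch below is a minimal, unoptimized illustration of that formula; the function name, parameters, and the final phase-only encoding are assumptions for the example, not taken from the book.

```python
import numpy as np

def point_cloud_hologram(points, amps, wavelength, pitch, ny, nx):
    """Minimal point-cloud CGH: superpose spherical waves from each point.

    points     : (N, 3) array of (x, y, z) point coordinates in metres.
    amps       : (N,) point amplitudes.
    wavelength : light wavelength in metres.
    pitch      : hologram pixel pitch in metres.
    ny, nx     : hologram resolution.
    Returns the complex field on the hologram plane (z = 0).
    """
    k = 2 * np.pi / wavelength
    ys = (np.arange(ny) - ny / 2) * pitch
    xs = (np.arange(nx) - nx / 2) * pitch
    X, Y = np.meshgrid(xs, ys)                     # shape (ny, nx)
    field = np.zeros((ny, nx), dtype=complex)
    for (px, py, pz), a in zip(points, amps):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r)            # spherical wave per point
    return field

# Example: three points 0.1 m behind a 512x512, 8 um-pitch hologram
pts = np.array([[0, 0, 0.1], [1e-3, 0, 0.1], [0, 1e-3, 0.1]])
holo = point_cloud_hologram(pts, np.ones(3), 532e-9, 8e-6, 512, 512)
phase_hologram = np.angle(holo)  # kinoform (phase-only) encoding
```

Note that the cost is O(N · ny · nx), which is precisely why FFT-based and hybrid approaches such as those in Refs. [78, 79] are attractive for dense scenes.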
Overcoming spatio-angular trade-off in light field acquisition using compressive sensing
Published in Automatika, 2020
Nicol Dlab, Filip Cimermančić, Ivan Ralašić, Damir Seršić
There are two main approaches for improving the spatial and angular resolution of the light field. Hybrid light field imaging systems like the ones proposed in [9,10] consist of a conventional high-resolution camera and a light field camera. In such imaging systems, the resolution of the individual light field sub-aperture images is enhanced using information from the conventional camera image. Recently, deep learning methods for light field super-resolution have been proposed [11–13]. These methods learn the underlying statistical distribution of a training dataset consisting of different light field examples. After training, such methods are able to ‘hallucinate’ higher-resolution details in the angular and spatial domains of the light field. In contrast to the hybrid methods mentioned above, deep learning methods for light field super-resolution are not based on physical measurements.
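To make the hybrid idea concrete, the sketch below shows a deliberately naive form of guided detail transfer: each low-resolution sub-aperture view is upsampled, and the high-frequency content present in the registered high-resolution camera image is added back. This is only an illustration of the principle under strong assumptions (a perfectly registered guide image, sizes divisible by the factor); it is not the actual algorithm of [9,10].

```python
import numpy as np
from scipy.ndimage import zoom

def hybrid_superresolve(lr_view, hr_guide, factor):
    """Naive hybrid light-field super-resolution by high-frequency transfer.

    lr_view  : low-resolution sub-aperture view of the light field.
    hr_guide : high-resolution conventional-camera image, assumed to be
               perfectly registered with this view.
    factor   : integer upsampling factor; hr_guide sides must equal
               factor times the lr_view sides.
    """
    up = zoom(lr_view, factor, order=3)           # cubic-spline upsampling
    # Low-pass version of the guide at the light field's native resolution
    guide_lp = zoom(zoom(hr_guide, 1.0 / factor, order=3), factor, order=3)
    detail = hr_guide - guide_lp                  # guide's high frequencies
    return up + detail                            # transfer detail to the view
```

In practice, methods such as [9,10] rely on careful registration and patch-based synthesis rather than a single global subtraction; the point of the sketch is only that the conventional camera supplies the spatial detail the plenoptic views lack.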