Saliency Object Detection
Published in Yu-Jin Zhang, A Selection of Image Analysis Techniques, 2023
Five types of images can be derived from light field data:
Micro-lens image: records the direction information of the light rays in the 4-D light field.
Sub-aperture image: an image formed by the light rays passing through one sub-aperture of the main lens. Since the main lens can be divided into multiple sub-apertures, the light field sensor can capture multiple sub-aperture images with different viewing angles.
Epipolar plane image (EPI): a light field image composed of the straight lines onto which points in the 3-D scene space project.
Focus stack image: one of a series of sharply focused images at different depth planes, obtained using the digital refocusing technique of the light field.
All-in-focus image: an image in which all object points are in sharp focus, obtained by fusing the in-focus regions of multiple focus stack images focused at different depths.
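The first three image types above are simple slices of the 4-D light field. As a minimal sketch, assume the light field is stored as a NumPy array lf[u, v, s, t], with (u, v) the angular indices and (s, t) the spatial pixel indices (the array name and axis order are illustrative assumptions, not from the source):

```python
import numpy as np

# Hypothetical 4-D light field: 5x5 angular samples, 64x64 spatial pixels.
U, V, S, T = 5, 5, 64, 64
rng = np.random.default_rng(0)
lf = rng.random((U, V, S, T))

def sub_aperture_image(lf, u, v):
    """Sub-aperture image: fix one angular sample, keep all spatial pixels."""
    return lf[u, v, :, :]

def epipolar_plane_image(lf, v, t):
    """EPI: fix one angular row v and one spatial row t -> a (u, s) slice
    in which scene points trace straight lines whose slope encodes depth."""
    return lf[:, v, :, t]

center = sub_aperture_image(lf, U // 2, V // 2)   # central view
epi = epipolar_plane_image(lf, V // 2, T // 2)    # one epipolar slice
```

Focus stack and all-in-focus images, by contrast, require refocusing and fusion rather than slicing.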
Intra Coding of Plenoptic Images in HEVC
Published in P. C. Thomas, Vishal John Mathai, Geevarghese Titus, Emerging Technologies for Sustainability, 2020
Ashlin George Mathew, Anu Abraham Mathew
With the advent of light field imaging technology, digital photography took one of its greatest leaps forward: highly detailed photographic images with different perspective views can be reproduced from a single exposure. A fixed number of sensor pixels records the light field, trading spatial resolution for the ability to capture higher-dimensional data. A light field (plenoptic) camera lets viewers shift focus, tilt, and vary the perspective and depth of field of different objects in a scene after capture. The light rays originating from the objects in a scene, recorded at any point in space, are sufficient to regenerate the scene from any perspective. Unlike in the framework of traditional cameras, an array of micro-lenses is inserted between the main lens and the image sensor to record the angular domain along with the spatial information.
Overcoming spatio-angular trade-off in light field acquisition using compressive sensing
Published in Automatika, 2020
Nicol Dlab, Filip Cimermančić, Ivan Ralašić, Damir Seršić
Light field imaging enables a wide variety of post-capture capabilities such as digital refocusing, viewpoint change, virtual aperture and depth sensing [7,8,14]. To model post-capture manipulation of the light field conveniently, the synthetic light field was introduced in [7], inspired by the synthetic sensor plane equation E(s, t) = (1/D²) ∬ L(u, v, s, t) A(u, v) cos⁴θ du dv, where D is the separation between the sensor and the aperture, A is the aperture function, and θ is the angle of incidence that the ray makes with the film plane [7]. All post-capture manipulations of the light field are essentially methods that numerically approximate this integral. For example, refocusing is conceptually just a summation of shifted versions of the images that form through pinholes over the entire aperture.
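The shift-and-add view of refocusing can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes a 4-D NumPy array lf[u, v, s, t], a uniform square aperture, and integer-pixel shifts via np.roll (a real implementation would interpolate sub-pixel shifts and handle borders):

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing: translate each sub-aperture image in
    proportion to its angular offset from the aperture centre, then
    average. This numerically approximates the synthetic photography
    integral; alpha is the ratio of the new focal depth to the old."""
    U, V, S, T = lf.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round((u - uc) * (1.0 - 1.0 / alpha)))
            dv = int(round((v - vc) * (1.0 - 1.0 / alpha)))
            out += np.roll(lf[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Toy light field: 5x5 angular samples, 32x32 spatial pixels.
rng = np.random.default_rng(1)
lf = rng.random((5, 5, 32, 32))
photo = refocus(lf, alpha=1.2)  # synthetic photo focused at a new depth
```

With alpha = 1 the shifts vanish and the result is just the average of all sub-aperture images, i.e. a photo focused at the original plane.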
Fabrication technology for light field reconstruction in glasses-free 3D display
Published in Journal of Information Display, 2023
Fengbin Zhou, Wen Qiao, Linsen Chen
Living in a 3D world, people have become dissatisfied with the visual perception of 2D imagery. Creating a virtual 3D universe has been man's dream for 150 years, since C. Wheatstone first invented stereoscopy [1]. A virtual 3D image can be reconstructed by discretizing a continuously distributed light field and creating multiple views based on either a ray-tracing method or a wavefront approach. The term 'light field' used here refers either to the collection of light rays of a 3D scene from the perspective of geometric optics, or to the amplitude and phase information of a wavefront in wave optics. With properly arranged views, 3D images with motion parallax and stereo parallax can be visualized by integrating an optical element with a flat panel. Unlike other technologies, including holographic display and volumetric display, light field 3D display has a compact form factor compatible with portable electronic devices. The function of the optical element in a light field 3D display is to modulate the phase information of the light field and generate a finite number of views, while the display panel refreshes the amplitude information. Notably, the modulated phase information of a view in a diffractive optics-based 3D display refers not only to the angular deflection, but also includes the light intensity distribution, angular separation, and view shape. As illustrated in Figure 1, parallax barriers, lenticular lens arrays, microlens arrays, directional backlights, multi-layer LCDs and diffractive optical elements (e.g. diffractive gratings and metasurfaces) have been extensively studied for creating multiple views.
Development of a scalable tabletop display using projection-based light field technology
Published in Journal of Information Display, 2021
Hamong Shim, Dongkil Lee, Jongbok Park, Seonkyu Yoon, Hoemin Kim, Kwangsoo Kim, Daerak Heo, Beomjun Kim, Joonku Hahn, Youngmin Kim, Wongun Jang
The last several decades have seen tremendous efforts to develop and realize natural and realistic displays, i.e. tabletop three-dimensional (3D) displays, which are considered the ideal and ultimate displays. Major researchers and developers were involved in the development of basic 3D displays to resolve the vergence-accommodation conflict [1] and to realize multiple views and full parallax. Among these 3D displays, light-field technologies that employ plenoptic functions, or four-dimensional functions, have recently attracted interest [2–5]. The light field can be described as a set of rays that pass through every point in space. For computer vision, it can be parameterized in terms of a plenoptic function L = P(Vx, Vy, Vz, Θ, Φ, λ), where (Vx, Vy, Vz) corresponds to a point in space, (Θ, Φ) corresponds to the orientation or direction of a ray, and λ is the wavelength [6]. In a light-field display, knowing the direction of the rays at every position of the unoccluded space enables easy handling and convenient rendering of new views. Radiant rays in the light-field display are rendered descriptions of the light field that can be projected through each microlens array to form 3D images in space. Many two-dimensional (2D) pixels represent radiant image sources, where the origin, direction, and intensity of the light rays are described by the plenoptic function. The light-field display is designed to display 3D content in a manner that appears natural to the human visual system by providing natural and consistent stereo, parallax, and focus cues.
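The geometric relation between a display pixel and the ray it contributes can be sketched directly. This is a hypothetical illustration (the function name, coordinates, and thin-lens-free pinhole model are assumptions, not from the source): a pixel at distance `gap` behind a microlens centre emits a ray whose direction (Θ, Φ) follows from the pixel-to-lens offset.

```python
import math

def ray_direction(pixel_xy, lens_xy, gap):
    """Direction of the ray that leaves the panel pixel at `pixel_xy`
    and passes through the microlens centre at `lens_xy`, with `gap`
    the lens-panel separation (same length units throughout).
    Returns (theta, phi): theta is the polar angle measured from the
    display normal, phi the azimuth in the display plane."""
    dx = lens_xy[0] - pixel_xy[0]
    dy = lens_xy[1] - pixel_xy[1]
    theta = math.atan2(math.hypot(dx, dy), gap)
    phi = math.atan2(dy, dx)
    return theta, phi

# A pixel directly beneath its lens emits along the normal (theta = 0);
# off-centre pixels are steered into distinct viewing directions.
on_axis = ray_direction((0.0, 0.0), (0.0, 0.0), 1.0)
off_axis = ray_direction((-1.0, 0.0), (0.0, 0.0), 1.0)
```

Enumerating this mapping over every pixel under every lens yields the discrete set of (origin, direction) samples of the plenoptic function that such a display can reproduce.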