Machine Vision
Published in Jerry C. Whitaker, Microelectronics, 2018
David A. Kosiba, Rangachar Kasturi
The scene radiance at a point on the surface of an object depends on the reflectance properties of the surface, as well as on the intensity and direction of the illumination sources. For example, for a Lambertian surface illuminated by a point light source, the scene radiance is proportional to the cosine of the angle between the surface normal and the direction of illumination. This relationship between surface orientation and brightness is captured in the reflectance map. In the reflectance map for a given surface and illumination, contours of equal brightness are plotted as a function of surface orientation specified by the gradient space coordinates (p, q), where p and q are the surface gradients in the x and y directions, respectively. A typical reflectance map for a Lambertian surface illuminated by a point source is shown in Fig. 19.2. In this figure, the brightest point corresponds to the surface orientation such that its normal points in the direction of the source. Since image brightness is proportional to scene radiance, a direct relationship exists between the image intensity at an image point and the orientation of the surface of the corresponding scene point. Shape-from-shading algorithms exploit this relationship to recover the three-dimensional object shape. Photometric stereo exploits the same principles to recover the shape of objects from multiple images obtained by illuminating the scene from different directions.
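The Lambertian relationship described above can be sketched numerically. The following snippet (an illustrative sketch, not code from the chapter) evaluates the classic reflectance map R(p, q) for a distant point source whose direction is given in the same gradient-space coordinates (p_s, q_s); the function name and source coordinates are assumptions for illustration.

```python
import numpy as np

def lambertian_reflectance_map(p, q, ps, qs):
    """Radiance R(p, q) of a Lambertian surface under a distant point
    source with gradient-space direction (ps, qs).

    R is the cosine of the angle between the surface normal
    (-p, -q, 1) and the source direction (-ps, -qs, 1), clamped to
    zero for self-shadowed orientations.
    """
    cos_theta = (1 + p * ps + q * qs) / (
        np.sqrt(1 + p**2 + q**2) * np.sqrt(1 + ps**2 + qs**2)
    )
    return np.maximum(cos_theta, 0.0)

# The brightest point of the map is the orientation whose normal
# points at the source, i.e. (p, q) == (ps, qs), where R = 1.
ps, qs = 0.5, 0.3
print(lambertian_reflectance_map(ps, qs, ps, qs))
```

Plotting R over a grid of (p, q) values and tracing its level sets reproduces the iso-brightness contours of a reflectance map such as the one in Fig. 19.2.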
Glossary of Computer Vision Terms
Published in Edward R. Dougherty, Digital Image Processing Methods, 2020
Robert M. Haralick, Linda G. Shapiro
Photometric stereo refers to the capability of determining surface orientation by means of the shading variations present on two or more images taken of the same scene from the same position and orientation but with the light source in different positions.
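A minimal sketch of this definition in code (not from the glossary; the function name and Lambertian least-squares formulation are assumptions): with k ≥ 3 images taken under known light directions, each pixel's scaled normal can be recovered by solving the linear system I = L(ρn).

```python
import numpy as np

def photometric_stereo(I, L):
    """Recover unit surface normals and albedo per pixel from k >= 3
    intensity measurements under known distant light directions.

    I : (k, n) intensities, one row per light, one column per pixel
    L : (k, 3) unit light-direction vectors
    Lambertian model I = L @ (albedo * n), solved in least squares.
    """
    g, *_ = np.linalg.lstsq(L, I, rcond=None)   # (3, n) scaled normals
    albedo = np.linalg.norm(g, axis=0)          # per-pixel reflectance
    normals = g / np.maximum(albedo, 1e-12)     # guard divide-by-zero
    return normals, albedo
```

With exactly three non-coplanar lights the system is square and the solution is exact; additional lights overdetermine it and suppress noise.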
Automated image segmentation of air voids in hardened concrete surface using photometric stereo method
Published in International Journal of Pavement Engineering, 2022
Jueqiang Tao, Haitao Gong, Feng Wang, Xiaohua Luo, Xin Qiu, Yaxiong Huang
The photometric stereo method estimates the 3D surface of objects based on the relationship between image intensity and the surface normal under various lighting conditions. Compared to conventional air-void detection methods, the photometric stereo method has the key advantage of achieving automation while reducing test time: it is cost-effective, provides ready access to high-resolution 3D images, is easy to implement, and reconstructs both textured and texture-less surfaces robustly. Typically, surface normal reconstruction and surface normal integration are the two sequential image processing steps of the photometric stereo method. The research methodology presented in this paper has shown that the automated segmentation of air voids can involve one or both of these steps, following two different approaches. In addition, shadow and specularity are two kinds of surface corruption that can be observed on concrete surfaces under various lighting directions, and it is important to address them properly during the photometric stereo procedure. A flow chart describing the computation process is shown in Figure 1. The photometric stereo system utilised in this research is shown in Figure 2; details of its setup are described in the ‘Instrument Setup’ section.
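The second step, surface normal integration, can be illustrated with a deliberately naive sketch (not the paper's method): convert the normal field to gradients p = -n_x/n_z and q = -n_y/n_z, then integrate by cumulative summation along the top row and down each column. Production pipelines use integrability-enforcing schemes such as Frankot-Chellappa instead; the function name here is an assumption.

```python
import numpy as np

def integrate_normals(normals):
    """Naive integration of a (3, H, W) unit-normal field into a
    height map z(x, y), defined up to a constant offset.

    Gradients: p = dz/dx = -nx/nz, q = dz/dy = -ny/nz.
    """
    nx, ny, nz = normals
    nz = np.where(np.abs(nz) < 1e-8, 1e-8, nz)  # guard grazing normals
    p = -nx / nz
    q = -ny / nz
    # Integrate p along the top row, then q down every column.
    return np.cumsum(p[0])[None, :] + np.cumsum(q, axis=0)
```

This path-based scheme is exact for noise-free, integrable gradient fields but propagates noise along the integration path, which is why global methods are preferred on real concrete-surface data.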
A comparative study on the application of SIFT, SURF, BRIEF and ORB for 3D surface reconstruction of electron microscopy images
Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2018
Ahmad P. Tafti, Ahmadreza Baghaie, Andrew B. Kirkpatrick, Jessica D. Holz, Heather A. Owen, Roshan M. D’Souza, Zeyun Yu
Generally speaking, 3D reconstruction techniques can be categorised into three major classes: (1) single-view, (2) multi-view and (3) hybrid. In single-view techniques, also known as photometric stereo, the 3D surface is reconstructed from a set of 2D images taken from a single viewpoint but with varying lighting directions. In this class, after acquiring the input images, the light directions are determined. This is followed by calculating the surface normal and albedo and finally depth estimation. In the multi-view class, the 3D surface is reconstructed by combining the information gathered from a set of 2D images acquired by changing the imaging view. In such techniques, also known as structure from motion, feature points are first detected in the input images. This is followed by finding the corresponding points in the images (point-matching) and then using projective geometry to estimate the camera projection matrices. Finally, 3D points are generated using linear triangulation. As one can imagine, the hybrid class combines the advantages of the single-view and multi-view techniques for a more accurate 3D reconstruction. Here, we focus on the multi-view class, especially since for the problem of 3D reconstruction using scanning electron microscope (SEM) images, acquiring an image set with varying light directions is not possible, whereas acquiring images at different tilting angles is commonplace. Therefore, a good understanding of feature-point detectors and local descriptors is beneficial.
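The final step of the multi-view pipeline, linear triangulation, can be sketched as follows (an illustrative sketch, not the authors' implementation): given two 3×4 camera projection matrices and a matched point pair, the standard DLT construction stacks the cross-product constraints and takes the null space via SVD.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from its projections
    x1, x2 (inhomogeneous image coords) under 3x4 camera matrices
    P1, P2. Returns the dehomogenised 3D point.
    """
    # Each view contributes two rows of the form x*P[2] - P[0], etc.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # right singular vector of smallest value
    return X[:3] / X[3]        # dehomogenise
```

With noisy correspondences the SVD solution minimises the algebraic (not reprojection) error, which is why triangulated points are typically refined by bundle adjustment in full structure-from-motion pipelines.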