Real-time 3D reconstruction using SLAM for building construction
Published in Konstantinos Papadikis, Chee S. Chin, Isaac Galobardes, Guobin Gong, Fangyu Guo, Sustainable Buildings and Structures: Building a Sustainable Tomorrow, 2019
3D reconstruction is the process of capturing the geometry and appearance of an object. Point cloud models, mesh models and geometric models are the three main representations of reconstructed objects, with point cloud models as the basis. A point cloud is a set of data points, each consisting of XYZ coordinates and color data, which together describe the external surface of an object. 3D reconstruction is widely used on construction sites; for example, 3D reconstruction of the as-built model can be applied to construction progress tracking, geometry quality inspection, construction safety management, etc. (Wang et al. 2019). There are two main methods of obtaining 3D point cloud models: image-based ranging algorithms and laser scanning. Image-based ranging algorithms process 2D images to produce a 3D point cloud model, whereas a 3D laser scanner generates the point cloud directly from depth information based on the round-trip time of the laser beam. Both methods can only generate 3D point clouds off-line, which delays the detection of errors made during data collection. For example, if engineers generate a model and find that parts are missing, they have to return and recollect data for those parts. It is therefore better to generate the point cloud model during data collection. In the present paper, we propose a real-time 3D reconstruction system using simultaneous localization and mapping (SLAM) and evaluate it against the image-based structure-from-motion (SfM) algorithm and a laser scanner.
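As a minimal illustration of the point-cloud representation described above (XYZ coordinates plus per-point color), the following Python sketch builds a small point cloud with NumPy and writes it to an ASCII PLY file. The array contents and the output file name are illustrative assumptions, not part of the system proposed in the paper.

```python
import numpy as np

# Illustrative only: a point cloud as N rows of XYZ coordinates plus RGB color.
num_points = 1000
xyz = np.random.uniform(-1.0, 1.0, size=(num_points, 3))            # coordinates (assumed units)
rgb = np.random.randint(0, 256, size=(num_points, 3), dtype=np.uint8)

def write_ascii_ply(path, xyz, rgb):
    """Write an XYZ+RGB point cloud to an ASCII PLY file (a common interchange format)."""
    header = "\n".join([
        "ply",
        "format ascii 1.0",
        f"element vertex {len(xyz)}",
        "property float x",
        "property float y",
        "property float z",
        "property uchar red",
        "property uchar green",
        "property uchar blue",
        "end_header",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        for (x, y, z), (r, g, b) in zip(xyz, rgb):
            f.write(f"{x} {y} {z} {r} {g} {b}\n")

write_ascii_ply("cloud.ply", xyz, rgb)   # hypothetical output file
```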
Reconstruction of 3D model and performance analysis of high strength steel wire with corrosion pit
Published in Joan-Ramon Casas, Dan M. Frangopol, Jose Turmo, Bridge Safety, Maintenance, Management, Life-Cycle, Resilience and Sustainability, 2022
T.Z. Cheng, Y. Pan, Y.Q. Dong, D.L. Wang
3D reconstruction technology restores the contour, texture, structure and other three-dimensional information of real-world objects in a computing environment from sampled source data, converting the real scene into a mathematical model that can be represented and processed by a computer.
External anatomical shapes reconstruction from turntable image sequences using a single off-the-shelf camera
Published in João Manuel R. S. Tavares, R. M. Natal Jorge, Computational Modelling of Objects Represented in Images, 2018
Teresa C.S. Azevedo, João Manuel R.S. Tavares, Mário A.P. Vaz
3D imaging of physical objects has become a strong research topic, mostly due to continuing improvements in computational resources. In particular, 3D reconstruction of the human body promises to open up a wide variety of applications: medicine, virtual reality, ergonomics, etc.
Automated UAV path-planning for high-quality photogrammetric 3D bridge reconstruction
Published in Structure and Infrastructure Engineering, 2022
Feng Wang, Yang Zou, Enrique del Rey Castillo, Youliang Ding, Zhao Xu, Hanwei Zhao, James B.P. Lim
There are numerous open-source software packages (e.g., openMVG, openMVS) and commercial software packages (e.g., Agisoft Metashape, Pix4D Mapper, ContextCapture) for photogrammetric 3D reconstruction of objects. Aicardi et al. (2016) compared the performance of different 3D modelling software packages and found that the model generated by ContextCapture showed the largest point density and the lowest standard deviation. Therefore, ContextCapture (https://www.bentley.com/en/products/product-line/reality-modeling-software/contextcapture-center) was selected for 3D bridge reconstruction in this study. Processing the UAV images to generate a 3D model using ContextCapture involves three simple steps. The first step is to create a block and import all the UAV images for a preview of their spatial positions and orientations. The second step is to submit the images for aerotriangulation to generate a sparse point cloud model. The final step is to submit the aerotriangulation result for the creation of the high-density 3D reality model.
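ContextCapture is driven through a graphical interface, so the three-step workflow above is not scripted in the study. Purely as an illustrative analogue, the sketch below drives the same pattern (import images, sparse aerotriangulation, dense reconstruction) with the open-source COLMAP command-line tools from Python; COLMAP is not the software used in the paper, and the directory names and parameters are assumptions.

```python
import subprocess
from pathlib import Path

# Illustrative analogue of the three-step workflow using open-source COLMAP
# (not the ContextCapture pipeline used in the study). Paths are assumptions.
images = Path("uav_images")        # step 1 analogue: the block of imported UAV images
workspace = Path("reconstruction")
workspace.mkdir(exist_ok=True)
database = workspace / "database.db"

def run(*args):
    subprocess.run(list(args), check=True)

# Step 2 analogue: feature extraction, matching and sparse reconstruction
# (aerotriangulation producing a sparse point cloud).
run("colmap", "feature_extractor", "--database_path", str(database), "--image_path", str(images))
run("colmap", "exhaustive_matcher", "--database_path", str(database))
(workspace / "sparse").mkdir(exist_ok=True)
run("colmap", "mapper", "--database_path", str(database), "--image_path", str(images),
    "--output_path", str(workspace / "sparse"))

# Step 3 analogue: dense reconstruction of a high-density model.
run("colmap", "image_undistorter", "--image_path", str(images),
    "--input_path", str(workspace / "sparse" / "0"), "--output_path", str(workspace / "dense"))
run("colmap", "patch_match_stereo", "--workspace_path", str(workspace / "dense"))
run("colmap", "stereo_fusion", "--workspace_path", str(workspace / "dense"),
    "--output_path", str(workspace / "dense" / "fused.ply"))
```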
Condition assessment of ship structure using robot assisted 3D-reconstruction
Published in Ship Technology Research, 2021
Faisal Mehmood Shah, Tomaso Gaggero, Marco Gaiotti, Cesare Mario Rizzo
Among them, the authors of Snavely et al. (2010) applied 3D reconstruction to real-world objects using bundle adjustment, feature-point matching and other technologies. As new GPUs, multicore processors and other hardware technologies constantly emerge, the SfM algorithm is now used in various fields, with the following pipeline: (1) feature extraction and image matching, (2) camera position estimation, (3) 3D point generation and (4) bundle adjustment.
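To make the pipeline concrete, the following Python sketch implements a minimal two-view version of these steps with OpenCV: SIFT feature extraction and matching, essential-matrix-based camera pose estimation, and linear triangulation of 3D points. Bundle adjustment, which jointly refines all cameras and points, is omitted, and the camera intrinsics and image file names are assumptions for illustration.

```python
import cv2
import numpy as np

# Minimal two-view structure-from-motion sketch (bundle adjustment omitted).
# The intrinsic matrix K and the image file names are assumed for illustration.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# (1) Feature extraction and image matching.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe's ratio test
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# (2) Camera position estimation from the essential matrix.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# (3) 3D point generation by linear triangulation.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
P2 = K @ np.hstack([R, t])                          # second camera from the recovered pose
points_4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points_3d = (points_4d[:3] / points_4d[3]).T

# (4) Bundle adjustment would refine R, t and points_3d jointly; omitted here.
print(points_3d.shape)
```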
A comparative study on the application of SIFT, SURF, BRIEF and ORB for 3D surface reconstruction of electron microscopy images
Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2018
Ahmad P. Tafti, Ahmadreza Baghaie, Andrew B. Kirkpatrick, Jessica D. Holz, Heather A. Owen, Roshan M. D’Souza, Zeyun Yu
Generally speaking, 3D reconstruction techniques can be categorised into three major classes: (1) single-view, (2) multi-view and (3) hybrid. In single-view techniques, also known as photometric stereo, the 3D surface is reconstructed from a set of 2D images taken from a single viewpoint but with varying lighting directions. In this class, after acquiring the input images, the light directions are determined; this is followed by calculating the surface normal and albedo, and finally by depth estimation. In the multi-view class, the 3D surface is reconstructed by combining information gathered from a set of 2D images acquired by changing the imaging view. In such techniques, also known as structure from motion, feature points are first detected in the input images; this is followed by finding the corresponding points across the images (point matching) and then using projective geometry to estimate the camera projection matrices. Finally, 3D points are generated using linear triangulation. As one can imagine, the hybrid class combines the advantages of the single-view and multi-view techniques for a more accurate 3D reconstruction. Here, we focus on the multi-view class, especially since, for the problem of 3D reconstruction from scanning electron microscope (SEM) images, acquiring an image set with varying light directions is not possible, whereas acquiring images at different tilting angles is commonplace. Therefore, a good understanding of feature-point detectors and local descriptors can be beneficial.
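For the single-view (photometric stereo) class described above, the normal-and-albedo step reduces, under a Lambertian assumption, to a per-pixel least-squares problem: with known light directions L and observed intensities I, solving L g = I gives g = albedo · n, from which the albedo and unit surface normal follow. The sketch below illustrates this step in Python with NumPy; the image stack and light directions are assumed inputs, and the final depth-from-normals integration is only indicated.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Estimate per-pixel surface normals and albedo from k images taken
    under known light directions (Lambertian assumption).

    images:     array of shape (k, H, W) with grayscale intensities
    light_dirs: array of shape (k, 3) with unit light-direction vectors
    """
    k, H, W = images.shape
    I = images.reshape(k, -1)                           # (k, H*W) intensity matrix
    # Solve L g = I for g = albedo * normal at every pixel (least squares).
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)                  # (H*W,)
    normals = g / np.maximum(albedo, 1e-8)              # unit normals, (3, H*W)
    return normals.reshape(3, H, W), albedo.reshape(H, W)

# Assumed example inputs: four synthetic images and four light directions.
lights = np.array([[0.0, 0.0, 1.0],
                   [0.5, 0.0, 0.87],
                   [0.0, 0.5, 0.87],
                   [-0.5, 0.0, 0.87]])
imgs = np.random.rand(4, 64, 64)                        # placeholder image stack
normals, albedo = photometric_stereo(imgs, lights)
# Depth estimation would follow by integrating the normal field
# (e.g., solving a Poisson equation); omitted here.
```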