Overview of optoelectronic technology in Construction 4.0
Published in Anil Sawhney, Mike Riley, Javier Irizarry, Construction 4.0, 2020
Photogrammetry techniques are heavily founded upon the Structure-from-Motion (SfM) and Dense Multi-View 3D Reconstruction (DMVR) post-processing algorithms. Newly developed photogrammetry applications are now commercially available and more accessible to construction due to the proliferation of mobile devices used on site and their lower cost in comparison to laser scan devices. SfM and DMVR algorithms automate the generation of 3D models from an unordered collection of images of the target object captured from multiple viewpoints (Sturm and Triggs, 1996). What distinguishes SfM is its ‘calibration’ process, in which the internal camera geometry parameters and the external camera position and orientation are automatically adjusted using correspondences between image features (Peters, 2010; Szeliski, 2010). SfM recovers the 3D structure of the environment and the camera motion within the scene to automate 3D modeling (Szeliski, 2010). Whereas 3D laser scanning is an expensive technology, image-processing SfM and DMVR software is cost-effective and efficient (cf. Golparvar-Fard et al., 2011).
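To make the calibration idea concrete, below is a minimal sketch of the core two-view step behind SfM using OpenCV: features are detected and matched across two viewpoints, the relative camera geometry is recovered from those correspondences, and a sparse 3D structure is triangulated. The image file names and the camera matrix K are hypothetical placeholders; a full SfM/DMVR pipeline chains many views and refines everything with bundle adjustment.

```python
import cv2
import numpy as np

K = np.array([[1000.0, 0.0, 640.0],   # assumed intrinsics (fx, fy, cx, cy)
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical images
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect and match local features between the two viewpoints.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 2. "Calibrate": estimate the relative camera geometry from correspondences.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
inl = mask.ravel() == 1
pts1, pts2 = pts1[inl], pts2[inl]
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

# 3. Triangulate a sparse 3D structure from the two recovered camera poses.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T   # Nx3 sparse point cloud
print(pts3d.shape)
```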
Extracting inundation patterns from flood watermarks: the case study of Ayutthaya, Thailand
Published in Vorawit Meesuk, Point Cloud Data Fusion for Enhancing 2D Urban Flood Modelling, 2017
Even though a fast shutter speed and Vibration-Reduction (VR) functions were carefully set to stabilise the cameras, some blurry photos were still captured owing to platform motion and target-object movement during capture. Blurred photos were subsequently removed manually, and the remaining sharp photos were selected for point-cloud reconstruction using Structure from Motion (SfM) techniques. Once the selected photos were converted and stored in JPEG file format, SfM techniques could be applied to detect, match, and reconstruct the 3D positions of features from consecutive overlapping 2D photos. SfM techniques use a feature-detection method to identify distinct reference points in each photo, using, for example, the open-source SiftGPU software (Wu, 2007). SiftGPU was developed from the original SIFT software package (Lowe, 2004) and is implemented on a multi-core Graphics Processing Unit (GPU). Once the corresponding feature points between overlapping photos were matched, they were used to calculate extrinsic parameters. These parameters can be used to recover camera positions from the relative rotations, projections, and transformations of corresponding photo scenes using 3D geometrical calculations. The Bundle Adjustment (BA) method was used to improve the accuracy of the calculated camera trajectories; BA also minimises the reprojection errors of camera tracking. The refined camera positions were then used to calculate sparse 3D point-cloud data of the target objects or of each structure.
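The blur screening described above was done manually; as a hedged sketch, the same screening could be automated by scoring each photo with the variance of its Laplacian (a common focus measure: low variance suggests blur) and keeping only photos above a cutoff. The folder name and threshold here are assumptions to be tuned per camera and scene, not the authors' procedure.

```python
import cv2
from pathlib import Path

BLUR_THRESHOLD = 100.0  # assumed cutoff; tune per camera and scene

def sharpness(path):
    """Variance of the Laplacian: a simple blur/focus measure."""
    img = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(img, cv2.CV_64F).var()

photos = sorted(Path("photos").glob("*.jpg"))  # hypothetical photo folder
sharp = [p for p in photos if sharpness(p) >= BLUR_THRESHOLD]
print(f"kept {len(sharp)} of {len(photos)} photos for SfM")
```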
Literature Survey on Recent Methods for 2D to 3D Video Conversion
Published in Ling Guan, Yifeng He, Sun-Yuan Kung, Multimedia Image and Video Processing, 2012
Raymond Phan, Richard Rzeszutek, Dimitrios Androutsos
Knorr et al. [17] take a different approach: not only do they generate a stereoscopic video sequence from its monocular counterpart, but they also generate multiple views from the generated stereoscopic sequence. In particular, they first employ structure from motion (SfM) to determine the external and internal parameters of the camera used to capture the scene. After the camera parameters are determined, multiple virtual cameras can be defined for each frame of the original video sequence, using their camera matrices as well as the rough separation distance between the two eyes. Figure 27.9 illustrates this concept. First, single-view cameras are positioned throughout the scene of interest. If multiple cameras are not available, a single camera can be used: the camera films the scene, then moves to another position and films the same scene again. The drawback of this single-camera approach is that the scene has to be relatively static. These camera positions are illustrated in the figure as blue shapes. For each original camera position, a corresponding multiview set is generated to create virtual camera positions, illustrated by red shapes. This is done by estimating homographies to temporally neighboring views of the original camera path, which is illustrated in gray shapes. The original camera path is not known a priori, but can be determined from the previously calculated homographies.
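As a minimal sketch of the homography step just described: a homography to a temporally neighbouring frame can be estimated from matched feature points and used to warp that frame toward a virtual camera position. The point arrays and frame below are hypothetical placeholders standing in for the matched correspondences a tracker would supply; this is not Knorr et al.'s exact formulation.

```python
import cv2
import numpy as np

def warp_to_virtual_view(neighbour_frame, pts_neighbour, pts_virtual):
    """pts_* are Nx2 float32 arrays of corresponding image points (N >= 4)."""
    # Robustly estimate the homography mapping the neighbour view
    # onto the virtual view, rejecting outlier matches with RANSAC.
    H, inliers = cv2.findHomography(pts_neighbour, pts_virtual,
                                    cv2.RANSAC, 3.0)
    h, w = neighbour_frame.shape[:2]
    # Synthesise the virtual view by warping the neighbouring frame.
    return cv2.warpPerspective(neighbour_frame, H, (w, h))
```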
Requirements and challenges for infusion of SHM systems within Digital Twin platforms
Published in Structure and Infrastructure Engineering, 2023
Rolando Chacón, Joan R. Casas, Carlos Ramonell, Hector Posada, Irina Stipanovic, Sandra Škarić
3D reconstruction of assets in outdoor spaces is mostly performed using photogrammetry. Complex structures (from isolated buildings to entire neighborhoods) can be reconstructed using a monocular camera sensor based on the concept of Structure from Motion (SfM). The SfM method takes as input a series of 2D images of an object, captured from a given area by a moving monocular camera sensor, extracts features, and produces high-quality 3D structures (Agüera-Vega et al., 2018). Commercial tools with built-in automated SfM pipelines for 3D modelling are available, but in many cases their algorithms and procedures are not disclosed. When images from many camera viewpoints are available, multi-view stereo (MVS) algorithms can construct considerably detailed 3D models: the input is a large set of calibrated, overlapping images captured from different perspectives, and the algorithms combine photometric consistency measures with efficient optimization. MVS technologies are gaining ground within the AEC sector.
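The photometric consistency measure at the heart of MVS can be illustrated with a small sketch: zero-mean normalised cross-correlation (NCC) between two image patches that a depth hypothesis projects onto in different calibrated views. Patch extraction from the calibrated cameras is omitted; the patches are assumed inputs.

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """NCC in [-1, 1]; values near 1 indicate a photo-consistent match."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```

In a plane-sweep MVS scheme, for each pixel the depth hypothesis with the highest NCC across neighbouring calibrated views would be selected, which is where the efficient optimization mentioned above comes in.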
Investigation of proposed countermeasures against coastal erosion and sediment invasion around half-buried floodway
Published in Coastal Engineering Journal, 2020
The first step is the alignment of the acquired images. The SfM algorithm detects image feature points and reconstructs their movement across the series of images. It provides the basic geometry, or structure, of the scene through the positions of the numerous matched features, in addition to the camera positions and internal camera calibration parameters. In the second step, a pixel-based dense stereo reconstruction is performed based on the aligned sparse point cloud. Referencing the reconstructed topography is also done in this step, by identifying benchmarks visible in each of the images and assigning their locations in a local coordinate system. After the dense point cloud is generated, a digital elevation model based on the reconstructed structure of the scene can be obtained and used for further processing and analysis. From this reconstructed beach morphology, an elevation model of the topography can be derived as shown in Figure 4(b).
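A minimal sketch of the final step, deriving a digital elevation model from the dense point cloud: bin the referenced points onto a regular grid and keep a representative elevation per cell. The `points` array (Nx3, in the local benchmark coordinate system) and the cell size are assumed inputs, not the authors' specific gridding procedure.

```python
import numpy as np

def point_cloud_to_dem(points, cell=0.5):
    """Rasterise (x, y, z) points into a DEM holding the mean z per cell."""
    # Integer grid indices relative to the cloud's lower-left corner.
    xy = ((points[:, :2] - points[:, :2].min(axis=0)) / cell).astype(int)
    nx, ny = xy.max(axis=0) + 1
    total = np.zeros((ny, nx))
    count = np.zeros((ny, nx))
    np.add.at(total, (xy[:, 1], xy[:, 0]), points[:, 2])
    np.add.at(count, (xy[:, 1], xy[:, 0]), 1)
    dem = np.full((ny, nx), np.nan)   # NaN marks cells with no points
    mask = count > 0
    dem[mask] = total[mask] / count[mask]
    return dem
```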
Model-based next-best-view planning of terrestrial laser scanner for HVAC facility renovation
Published in Computer-Aided Design and Applications, 2018
Eisuke Wakisaka, Satoshi Kanai, Hiroaki Date
To obtain a priori knowledge of the space to be scanned, an SfM model is generated from multiple photos using commercial SfM software [7]. SfM is a 3D modeling method that simultaneously reconstructs a textured 3D model from a large collection of photos and estimates the camera positions and orientations (extrinsic parameters) corresponding to each photo. These extrinsic parameters are used in steps A2 and A5. However, when SfM is applied to indoor HVAC facilities, the resulting model tends to be incomplete and includes defects such as holes, owing to the lack of feature points in the photos. Because classifying spatial occupancy with such a defective model produces false classifications and may yield a poor scanner position, we rectify these defects in our algorithm, as described in A2.
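As a hedged illustration of the spatial-occupancy classification the passage refers to (the paper's actual procedure in A2 is not reproduced here), one simple formulation voxelises the SfM point cloud and marks voxels containing points as occupied; a hole in the model would leave voxels falsely empty, which is exactly the failure mode the defect rectification addresses. The `points` array and voxel size are assumed inputs.

```python
import numpy as np

def occupancy_grid(points, voxel=0.2):
    """Return a boolean occupancy grid from an Nx3 point cloud."""
    idx = ((points - points.min(axis=0)) / voxel).astype(int)
    grid = np.zeros(tuple(idx.max(axis=0) + 1), dtype=bool)
    grid[tuple(idx.T)] = True  # a hole in the model leaves False voxels
    return grid
```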