Measuring traffic-induced loads and 3D bridge displacements with UAVs
Published in Hiroshi Yokota, Dan M. Frangopol, Bridge Maintenance, Safety, Management, Life-Cycle Sustainability and Innovations, 2021
An Intel RealSense Depth Camera was used. This camera was initially developed to measure and track hand motions for interaction with computers in virtual environments using virtual-reality head-mounted displays. The camera casts an infrared laser that projects a virtual speckle pattern onto an object. In addition to an RGB camera, two infrared cameras are attached to the sensor, which record the reflection of the laser pattern. The camera's "camera matrix," which consists of the intrinsic and extrinsic information, is known and constant. Knowing the "camera matrix," the speckle pattern of the infrared laser is tracked and measured. Using Intel's RealSense SDK 2.0, an algorithm was developed that measured the distance from an object to the sensor, and the power spectral density of the obtained time history of displacements was estimated using a Fast Fourier Transform. After adjusting the camera's parameters, an ROI is selected that is small enough to be considered a point on the object. In this way, the dynamic displacement of the object in the depth direction, from the sensor to the ROI, is measured.
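The paper's processing chain (displacement time history → FFT → power spectral density) can be sketched generically in numpy; this is not the authors' RealSense SDK 2.0 code, and the 30 fps sampling rate and 3 Hz test signal are illustrative assumptions:

```python
import numpy as np

def displacement_psd(displacements, fs):
    """Estimate the one-sided power spectral density of a displacement
    time history via the FFT (a simple periodogram)."""
    x = np.asarray(displacements, dtype=float)
    x = x - x.mean()                      # remove the static offset
    n = len(x)
    spectrum = np.fft.rfft(x)
    psd = (np.abs(spectrum) ** 2) / (fs * n)
    psd[1:-1] *= 2                        # fold in negative frequencies
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd

# Synthetic example: a 3 Hz vibration sampled at 30 frames per second
fs = 30.0
t = np.arange(0, 10, 1 / fs)
x = 0.5 * np.sin(2 * np.pi * 3.0 * t)
freqs, psd = displacement_psd(x, fs)
peak = freqs[np.argmax(psd)]
print(peak)   # dominant frequency, ≈ 3.0 Hz
```

In practice the displacement samples would come from the depth values of the selected ROI, frame by frame.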
Using Cameras in ROS
Published in Wyatt S. Newman, A Systematic Approach to Learning Robot Programming with ROS, 2017
Rectification for the left camera is essentially a $3 \times 3$ identity matrix, meaning this camera is already virtually perfectly aligned in the epipolar frame. The right-camera rectification rotation matrix is also essentially the identity, since the Gazebo model specified the right camera to be parallel to the left camera (and thus no rotation correction is necessary). The camera matrix contains intrinsic parameters of each camera. Note that, for both cameras, $(u_c,v_c) = (319.5,239.5)$, which is the middle of the sensor plane (as mandated by the option fix-principal-point). Also, $f_x=f_y$ for both left and right cameras, which was enforced as an option. However, $f_x=1033.95$ for the left camera, and $f_x= 1036.33$ for the right camera. These should have been identical at $f_x = 1034.47$. This shows that the calibration algorithm (which performs a non-linear search in parameter space) is not perfect (although the identified values may well be adequately precise).
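A short sketch of what these intrinsic parameters mean in practice: assembling the camera matrix $K$ from the left-camera values quoted above and projecting a 3D point (camera frame) to pixel coordinates. The test point is an arbitrary illustration:

```python
import numpy as np

# Intrinsic camera matrix K from the calibrated left-camera values:
# fx = fy = 1033.95, principal point (uc, vc) = (319.5, 239.5)
fx = fy = 1033.95
uc, vc = 319.5, 239.5
K = np.array([[fx, 0.0, uc],
              [0.0, fy, vc],
              [0.0, 0.0, 1.0]])

def project(K, point_cam):
    """Project a 3D point in the camera frame (metres) to pixels."""
    uvw = K @ point_cam
    return uvw[:2] / uvw[2]          # perspective division

# A point 2 m in front of the camera, 0.1 m to the right of the axis:
u, v = project(K, np.array([0.1, 0.0, 2.0]))
print(u, v)   # u = 319.5 + 1033.95 * 0.05 ≈ 371.2, v = 239.5
```

The small mismatch between the identified left and right $f_x$ values would show up here as a sub-pixel disagreement in the projected $u$ coordinates of the two cameras.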
A survey on vision guided robotic systems with intelligent control strategies for autonomous tasks
Published in Cogent Engineering, 2022
Abhilasha Singh, V. Kalaichelvi, R. Karthikeyan
The general classification of camera calibration methods is conventional visual calibration, camera self-calibration, and active vision-based calibration (Long & Dongri, 2019). In traditional visual calibration, intrinsic parameters and extrinsic parameters are computed. The intrinsic parameters such as focal length, skew, and the principal point of the lens are constant, whereas the external parameters, the translational and rotational components, are dynamic. Traditional calibration methods include the DLT algorithm, the Tsai calibration method (Tsai, 1987), the linear transformation method, the RAC-based calibration method, and the Zhang Zhengyou calibration method based on a chessboard (Zhang, 2000). The most commonly and widely used technique for calibration is Zhang's method. In this technique, checkerboard pattern images are taken at different angles using a pinhole camera. Next, the 3 × 4 camera matrix is obtained, which is the combination of the intrinsic and extrinsic matrices. Using these matrices, a camera-to-world transformation can be found by mapping the checkerboard corner points to the world corner points. World coordinates are transformed to camera coordinates using the extrinsic matrix, whereas camera coordinates are mapped to pixel coordinates using the intrinsic matrix. The intrinsic matrix I and extrinsic matrix E of the pinhole model are given by
$$ I = \begin{bmatrix} f_x & s & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad E = \begin{bmatrix} \mathbf{R} & \mathbf{t} \end{bmatrix} $$
where $f_x, f_y$ are the focal lengths, $s$ the skew, $(u_0, v_0)$ the principal point, and $\mathbf{R}$, $\mathbf{t}$ the rotation matrix and translation vector.
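The composition of intrinsic and extrinsic matrices into the 3 × 4 camera matrix can be sketched with numpy; the focal length, principal point, and camera pose below are hypothetical placeholder values, not calibration output:

```python
import numpy as np

# Hypothetical intrinsic matrix (focal lengths, zero skew, principal point)
I = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical extrinsics: camera axis-aligned, 2 m from the board plane
R = np.eye(3)
t = np.array([[0.0], [0.0], [2.0]])
E = np.hstack([R, t])        # 3x4 extrinsic matrix [R | t]

P = I @ E                    # 3x4 camera matrix (intrinsic x extrinsic)

def world_to_pixel(P, Xw):
    """Map a 3D world point to pixel coordinates via the camera matrix."""
    uvw = P @ np.append(Xw, 1.0)   # homogeneous coordinates
    return uvw[:2] / uvw[2]

# A checkerboard corner at world coordinates (0.1, 0.05, 0) m:
uv = world_to_pixel(P, np.array([0.1, 0.05, 0.0]))
print(uv)   # [360. 260.]
```

Zhang's method effectively runs this mapping in reverse: given many corner correspondences from different views, it solves for `I`, `R`, and `t` that best explain the observed pixel positions.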
Modelling and calibration of a five link elastic boom of a mobile concrete pump
Published in Mathematical and Computer Modelling of Dynamical Systems, 2023
M. Meiringer, A. Kugi, W. Kemmetmüller
The so-called camera matrix of each camera is obtained by intrinsic calibration using a chequerboard and extrinsic calibration using known reference points of the machine. Thereby, the intrinsic calibration aims at minimizing the lens and sensor distortions by reducing the quadratic error between measured and known corner points of a standard black-and-white chequerboard, see, e.g., [23]. With the extrinsic calibration, the exact position and orientation of the camera frame w.r.t. the inertial coordinate frame is found.
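The quadratic error mentioned here is the cost that the intrinsic calibration drives down. A minimal sketch of evaluating it, with hypothetical corner coordinates standing in for the chequerboard measurements:

```python
import numpy as np

def reprojection_error(predicted, measured):
    """Quadratic (mean squared) error between corner points predicted by
    the current calibration and the measured image corners, in pixels^2."""
    d = np.asarray(predicted, float) - np.asarray(measured, float)
    return float(np.mean(np.sum(d * d, axis=1)))

# Hypothetical 2D corner sets (pixel coordinates)
predicted = np.array([[100.0, 100.0], [150.0, 100.0], [100.0, 150.0]])
measured  = np.array([[100.5,  99.5], [149.0, 100.5], [100.0, 151.0]])
err = reprojection_error(predicted, measured)
print(err)
```

An optimizer adjusts the intrinsic and distortion parameters until this value is minimized over all detected corners.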
A marker-free method for structural dynamic displacement measurement based on optical flow
Published in Structure and Infrastructure Engineering, 2021
Jinsong Zhu, Ziyue Lu, Chi Zhang
With the camera matrix, the distortion parameters can be calculated and the images corrected. For cameras with no apparent lens distortion, on the other hand, this correction step is not necessary. For monitoring measurements it is in any case preferable to locate the target region in the central area of the field of view, which suffers less lens distortion (Xu et al., 2018).
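The correction step can be sketched with a one-term radial distortion model; this is a simplified illustration (real calibrations typically also include higher-order radial and tangential terms), and the camera matrix and coefficient below are made-up values:

```python
import numpy as np

def undistort_points(pts, K, k1):
    """Remove first-order radial distortion (coefficient k1) from pixel
    points, given camera matrix K. Fixed-point inversion of the model
    x_d = x_u * (1 + k1 * r_u^2)."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # normalise to the ideal image plane
    x = (pts[:, 0] - cx) / fx
    y = (pts[:, 1] - cy) / fy
    xu, yu = x.copy(), y.copy()
    for _ in range(20):                 # iterate the inverse mapping
        r2 = xu**2 + yu**2
        xu = x / (1 + k1 * r2)
        yu = y / (1 + k1 * r2)
    return np.column_stack([xu * fx + cx, yu * fy + cy])

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
k1 = -0.2                               # barrel distortion (hypothetical)

# Distort a known ideal point, then check the correction recovers it
x, y = 0.3, 0.2                          # normalised coordinates
r2 = x * x + y * y
pt = np.array([[(x * (1 + k1 * r2)) * 800 + 320,
                (y * (1 + k1 * r2)) * 800 + 240]])
rec = undistort_points(pt, K, k1)
print(rec)   # ≈ [[560. 400.]]
```

Note how the distortion term scales with $r^2$: points near the principal point barely move, which is why the passage recommends keeping the target region near the centre of the field of view.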