In-Process Measurement in Manufacturing Processes
Published in Wasim Ahmed Khan, Ghulam Abbas, Khalid Rahman, Ghulam Hussain, Cedric Aimal Edwin, Functional Reverse Engineering of Machine Tools, 2019
Ahmad Junaid, Muftooh Ur Rehman Siddiqi, Riaz Mohammad, Muhammad Usman Abbasi
The extrinsic parameters of a camera describe its position and orientation in space relative to the object. Camera calibration is useful in applications such as machine vision, where the actual size of an object is measured from images. The following equation gives the transformation relating a world coordinate [X Y Z] in the checkerboard's frame to the corresponding image point [x y]:

s · [x y 1] = [X Y Z 1] · [R; t] · K,

where R (a 3 × 3 rotation matrix) and t (a 1 × 3 translation vector) are the extrinsic parameters, K is the 3 × 3 intrinsic matrix, and s is an arbitrary scale factor.
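As a concrete illustration, the same projection can be written in the equivalent column-vector form s [x y 1]ᵀ = K [R | t] [X Y Z 1]ᵀ and evaluated directly. The matrices below are made-up illustrative values, not a real calibration:

```python
# Minimal pinhole-projection sketch: maps a world point on the
# checkerboard to pixel coordinates via extrinsics [R | t] and
# intrinsics K. All numbers are illustrative, not measured.

def mat_vec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Intrinsic matrix K: focal lengths fx, fy and principal point (cx, cy).
K = [[800.0,   0.0, 320.0],
     [  0.0, 800.0, 240.0],
     [  0.0,   0.0,   1.0]]

# Extrinsics: identity rotation, camera 2 m in front of the target.
R = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 2.0]

def project(X, Y, Z):
    """Return the image point (x, y) of world point (X, Y, Z)."""
    # Camera-frame coordinates: R @ [X, Y, Z] + t
    pc = [r + ti for r, ti in zip(mat_vec(R, [X, Y, Z]), t)]
    # Apply intrinsics, then divide by the scale factor s (third row).
    sx, sy, s = mat_vec(K, pc)
    return sx / s, sy / s

x, y = project(0.1, 0.0, 0.0)   # a corner 10 cm right of the origin
```

Note that the world origin projects to the principal point (cx, cy), since the camera looks straight at it here.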
Camera Network for Surveillance
Published in Maheshkumar H. Kolekar, Intelligent Video Surveillance Systems, 2018
Camera calibration is the process of estimating the parameters of a camera from images of a special calibration pattern. The parameters include the camera intrinsics, distortion coefficients, and camera extrinsics. These parameters can be used to correct for lens distortion, measure the size of an object, or determine the location of the camera in the scene. Multi-camera calibration (MCC) maps different camera views to a single coordinate system, and in many surveillance systems it is a key pre-processing step for other multi-camera analyses. It consists of estimating intrinsic and/or extrinsic parameters. Intrinsic parameters deal with the camera’s internal characteristics, such as its focal length, skew, distortion, and image center. Extrinsic parameters describe its position and orientation in the world, and are used in applications such as object detection and measurement, robot navigation, and 3-D scene reconstruction.
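To make the role of the distortion coefficients concrete, here is a minimal sketch of the radial part of a Brown-style lens-distortion model together with a fixed-point undistortion step; the coefficients k1 and k2 are illustrative assumptions, not values from any real camera:

```python
# Radial distortion acts on normalized image coordinates (xn, yn):
# the point is pushed along its radius by a polynomial in r^2.
# Undistortion inverts the model by fixed-point iteration.

def distort(xn, yn, k1=-0.2, k2=0.05):
    """Apply radial distortion to normalized image coordinates."""
    r2 = xn * xn + yn * yn
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return xn * f, yn * f

def undistort(xd, yd, k1=-0.2, k2=0.05, iters=20):
    """Recover undistorted coordinates by fixed-point iteration."""
    xn, yn = xd, yd
    for _ in range(iters):
        r2 = xn * xn + yn * yn
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        xn, yn = xd / f, yd / f
    return xn, yn
```

For moderate distortion the iteration converges in a handful of steps, which is why this simple scheme is widely used in practice.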
Laser velocimetry
Published in Stefano Discetti, Andrea Ianiro, Experimental Aerodynamics, 2017
Once the desired camera field of view and laser position have been established, the next step before recording flow data is typically to acquire a series of images from which the physical location and size of the camera field of view can be determined. This process is referred to as camera calibration and is most often performed using a calibration target. Such targets are usually gridded with evenly spaced lines or symbols printed or engraved in high contrast so that they are easy to locate accurately in the resulting images, and they often include reference marks that establish the position and orientation of the overall target. The target should be manufactured to high precision, because any error in its construction translates directly into systematic uncertainty in the physical magnitudes of the estimated velocity fields (see the “Uncertainty due to image calibration” section for further detail). However, due to geometric constraints, it is not always possible to place and remove such a target without disturbing the rest of the experiment. In these cases, other options can be used, including fiducials (objects with known size and position) placed in the experimental domain that remain fixed during testing, or measurement of an experimental model or other feature that will be present in the field of view during the experiment; several of these approaches are sometimes combined for redundancy.
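As a minimal sketch of how such a gridded target yields a calibration, the code below estimates the image magnification (mm per pixel) by a least-squares fit of known mark positions against their detected pixel positions; the mark spacing and pixel coordinates are invented illustrative values:

```python
# Estimate image magnification from evenly spaced target marks.
# The slope of physical position vs. pixel position gives mm/pixel.

def fit_scale(px, mm):
    """Least-squares slope of mm vs. px (the magnification)."""
    n = len(px)
    mx = sum(px) / n
    my = sum(mm) / n
    num = sum((x - mx) * (y - my) for x, y in zip(px, mm))
    den = sum((x - mx) ** 2 for x in px)
    return num / den

# Marks engraved every 5 mm, detected at roughly 50-pixel spacing
# (small detection errors included to mimic real measurements).
pixel_pos = [102.0, 151.9, 202.1, 252.0, 301.8]
world_pos = [0.0, 5.0, 10.0, 15.0, 20.0]

scale = fit_scale(pixel_pos, world_pos)   # roughly 0.1 mm/pixel
```

Averaging over all marks, rather than using a single pair, reduces the sensitivity of the scale to the detection error of any one mark.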
Vehicle trajectory data extraction from the horizontal curves of mountainous roads
Published in Transportation Letters, 2022
V.A. Bharat Kumar Anna, Suvin P. Venthuruthiyil, Mallikarjuna Chunchu
In videography-based trajectory extraction, coordinate transformation is a major step for correcting the distortion in images and mapping the image coordinates to real-world coordinates. Researchers have proposed various camera calibration techniques to correct the distortion in images and transform the coordinates (Fung 2003; He 2007; Tsai 1987; Schoepflin and Dailey 2003; Wang and Tsai 1991). Camera calibration is performed by computing both the extrinsic and intrinsic parameters of the camera. The extrinsic parameters indicate the position and orientation of the camera with reference to the real-world coordinates, while the intrinsic parameters describe the inherent properties of the camera such as the focal length, optical center, image scaling factor, and lens distortion coefficients. In general, the intrinsic parameters remain the same, except for the focal length, which may vary with the intended use. The intrinsic parameters can therefore be calibrated prior to deploying the camera.
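When the road surface can be treated as a plane, the image-to-world mapping described above is commonly expressed as a planar homography. The sketch below applies a given homography H to a pixel; H here is an invented pure-scaling example, whereas in practice it would be estimated from at least four known point correspondences on the road:

```python
# Map a pixel to road-plane coordinates with a 3x3 homography H.
# The result is a projective transform: divide by the third row.

def image_to_world(H, u, v):
    """Map pixel (u, v) to world (X, Y) on the road plane."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

# Illustrative homography: a pure scaling of 0.05 m per pixel.
# A real H for an oblique roadside camera would also encode
# perspective foreshortening in its third row.
H = [[0.05, 0.0, 0.0],
     [0.0, 0.05, 0.0],
     [0.0,  0.0, 1.0]]

X, Y = image_to_world(H, 200.0, 120.0)   # -> (10.0, 6.0) metres
```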
Regularised digital-level corrections for infrared image correlation
Published in Quantitative InfraRed Thermography Journal, 2018
A. Charbal, S. Roux, F. Hild, L. Vincent
One of the camera calibration steps consists of correcting optical lens distortion. This can be performed using a patterned calibration target that is imaged by the system [11]. A reference image is generated numerically (i.e. a synthetic target) and is hence free from any deformation due to lens distortion. The displacement fields obtained by registering the pictured and synthetic target images are interpreted as distortions (after subtraction of rigid-body motions accounting for positioning and scaling). Two strategies can be applied. The first is to use an integrated DIC algorithm that directly provides the parameters of distortion models (such as Zernike polynomials [12,13]). The second is to use a generic DIC algorithm and either low-pass filter the measured displacement or project it onto a suited basis (such as generic distortion fields [5]). In both cases, it is necessary to account for grey-level variations between the reference and deformed images, which have different origins (see Figure 15). In the case of IR cameras, for instance, the frames are often blurred at the edges (Figure 15(a)).
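The second strategy can be sketched in one dimension: project a measured displacement field onto a small polynomial basis by least squares, where the constant and linear terms play the role of rigid-body motion (positioning and scaling) and the remaining quadratic term is read as distortion. The data below are synthetic:

```python
# Project a 1-D displacement field u(x) onto the basis {1, x, x^2}.
# c0 (translation) and c1 (scaling) are rigid-body-like terms;
# c2 is the distortion-like remainder.

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

def project_poly(xs, us):
    """Least-squares coefficients of u(x) = c0 + c1*x + c2*x**2."""
    basis = [[1.0, x, x * x] for x in xs]
    # Normal equations: (B^T B) c = B^T u
    A = [[sum(b[i] * b[j] for b in basis) for j in range(3)]
         for i in range(3)]
    rhs = [sum(b[i] * u for b, u in zip(basis, us)) for i in range(3)]
    return solve3(A, rhs)

# Synthetic field: translation 0.5 px, scaling 0.01, distortion 2e-4.
xs = [float(x) for x in range(0, 101, 10)]
us = [0.5 + 0.01 * x + 2e-4 * x * x for x in xs]
c0, c1, c2 = project_poly(xs, us)
```

In the paper's setting the basis would instead be 2-D distortion fields (e.g. Zernike polynomials), but the projection step is the same least-squares operation.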
Generation and quality improvement of 3D models from silhouettes of 2D images
Published in Journal of the Chinese Institute of Engineers, 2018
Watchama Phothong, Tsung-Chien Wu, Chun-Yeh Yu, Jiing-Yih Lai, Douglas W. Wang, Chao-Yaug Liao
Multiple images of an object are captured using a camera from different viewing directions. Camera calibration is a prerequisite process, as it provides the camera parameters as well as the position and orientation of the camera in 3D space. Parameters such as the focus and aspect ratio of the camera, the camera position, the viewing direction, and the upward direction are obtained during calibration (Liao et al. 2016). All of the position and orientation vectors are expressed in a coordinate system located at the center of the turntable on which the object is placed. Two different sets of images are captured in the proposed process. For the first set, images of the patterned mat are captured at uniform angular intervals around the center of the turntable, with the number of images matching the number of views to be captured of the object. For the second set, images of the object are captured at the same angles as those of the patterned mat. The boundary of each object image is called a silhouette; the silhouette profile is essentially a trajectory of pixel points along the object boundary. A point reduction algorithm is employed to reduce the density of points on the silhouette profile, yielding a set of silhouette points per image. The silhouette points from all images are then used to calculate 3D points of the object.
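The text does not specify which point reduction algorithm is used; the Ramer-Douglas-Peucker algorithm is one common choice for thinning a dense boundary polyline, sketched here for an open silhouette segment:

```python
# Ramer-Douglas-Peucker polyline simplification: keep only points
# that deviate from the chord between the endpoints by more than eps.

def rdp(points, eps):
    """Reduce a polyline, keeping points farther than eps from the chord."""
    if len(points) < 3:
        return points[:]
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    # Perpendicular distance of each interior point to the chord.
    dmax, imax = 0.0, 0
    for i, (x, y) in enumerate(points[1:-1], start=1):
        d = abs(dy * (x - x1) - dx * (y - y1)) / norm
        if d > dmax:
            dmax, imax = d, i
    if dmax <= eps:
        return [points[0], points[-1]]
    # Recurse on both halves around the farthest point.
    left = rdp(points[:imax + 1], eps)
    right = rdp(points[imax:], eps)
    return left[:-1] + right

# A noisy, nearly straight silhouette segment collapses to its endpoints,
# while a genuine corner is preserved.
segment = [(x, 0.1 * (x % 2)) for x in range(20)]
reduced = rdp(segment, eps=0.5)
```

The tolerance eps controls the trade-off between silhouette fidelity and the number of points passed to the 3D reconstruction step.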