Advanced Sensors and Vision Systems
Published in Marina Indri, Roberto Oboe, Mechatronics and Robotics, 2020
Visual odometry is the process of recovering the relative position and orientation of a robot by analyzing images from its onboard vision devices. Alternatively, visual odometry is defined as the estimation of the robot's egomotion using only cameras mounted on the robot [20, 21]. It aims to recover the equivalent of wheel-odometry data from sequential camera images, estimating the motion traveled by the robot. The visual odometry technique can improve the navigation precision of robots across a variety of indoor and outdoor environments and terrains. A typical example estimates a robot's motion by extracting and matching features between two sequential camera images. Vision-based motion estimation is therefore an important task in autonomous robot navigation. The general flowchart of motion estimation based on vision sensors is illustrated in Figure 11.1.
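As a concrete illustration of this two-frame motion estimation, the minimal sketch below matches features between consecutive monocular frames and recovers the relative camera pose from the essential matrix using OpenCV. The image paths and the intrinsic matrix K are placeholder assumptions, not values from the chapter, and a monocular translation is recoverable only up to scale.

```python
import cv2
import numpy as np

# Placeholder intrinsics and frame paths (assumptions for illustration).
K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])
img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect and match features between the two sequential images.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the essential matrix with RANSAC, then decompose it into the
# rotation R and unit-scale translation t between the two camera poses.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("Relative rotation:\n", R)
print("Relative translation (up to scale):", t.ravel())
```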
Real-time 3D reconstruction using SLAM for building construction
Published in Konstantinos Papadikis, Chee S. Chin, Isaac Galobardes, Guobin Gong, Fangyu Guo, Sustainable Buildings and Structures: Building a Sustainable Tomorrow, 2019
The main functionality of visual odometry is to estimate camera motion from the images taken. An image is a matrix of brightness and color values, and it is difficult to estimate camera motion directly from such raw data. Therefore, representative points that remain stable under changes in the camera's position and orientation are extracted from the image first. These representative points are called Features in visual SLAM, and the camera position and orientation are estimated by calculating the changes of these Features between images. Corners and edges in the image are taken as features since they are more distinguishable across different images. However, when the camera is far from an object, its corners may no longer be recognized. Therefore, in this research, Oriented FAST and Rotated BRIEF (ORB) is used to detect Features, chosen for four characteristics: repeatability, distinctiveness, efficiency and locality (Rublee et al. 2011). An ORB feature is composed of two parts: a key point and a descriptor. Its key point is called Oriented FAST, an orientation-aware variant of the Features from Accelerated Segment Test (FAST) corner detector (Rosten et al. 2006). Its descriptor is the Binary Robust Independent Elementary Features (BRIEF) descriptor (Calonder et al. 2010). After the ORB features are computed, the Fast Library for Approximate Nearest Neighbors (FLANN) is used to match ORB features between images (Muja et al. 2014). Finally, the camera position and orientation are calculated using Perspective-n-Point (PnP) with the position data of the ORB features (Lepetit et al. 2013).
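A minimal sketch of this ORB–FLANN–PnP pipeline follows, assuming an RGB-D sensor so that matched features in the first frame can be back-projected to 3-D using the depth image. The file names, intrinsics, and depth scale are placeholder assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Placeholder intrinsics, depth scale, and frame paths (assumptions).
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float64)
depth_scale = 1000.0  # assumed: depth image stores millimeters

rgb1 = cv2.imread("rgb_000.png", cv2.IMREAD_GRAYSCALE)
rgb2 = cv2.imread("rgb_001.png", cv2.IMREAD_GRAYSCALE)
depth1 = cv2.imread("depth_000.png", cv2.IMREAD_UNCHANGED)

# 1. Detect ORB key points and descriptors in both frames.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(rgb1, None)
kp2, des2 = orb.detectAndCompute(rgb2, None)

# 2. Match the binary descriptors with FLANN (LSH index suits binary
#    features) and keep only matches passing Lowe's ratio test.
flann = cv2.FlannBasedMatcher(
    dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1),
    dict(checks=50))
good = [m[0] for m in flann.knnMatch(des1, des2, k=2)
        if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]

# 3. Back-project first-frame matches to 3-D via the depth image, then
#    solve PnP against their 2-D positions in the second frame.
obj_pts, img_pts = [], []
for m in good:
    u, v = map(int, kp1[m.queryIdx].pt)
    z = depth1[v, u] / depth_scale
    if z <= 0:
        continue  # skip pixels with no valid depth reading
    obj_pts.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
    img_pts.append(kp2[m.trainIdx].pt)

ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    np.float32(obj_pts), np.float32(img_pts), K, None)
R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
print("Camera rotation:\n", R, "\ntranslation:", tvec.ravel())
```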
Multisensor Precise Positioning for Automated and Connected Vehicles
Published in Hussein T. Mouftah, Melike Erol-Kantarci, Sameh Sorour, Connected and Autonomous Vehicles in Smart Cities, 2020
Mohamed Elsheikh, Aboelmagd Noureldin
Cameras and vision systems are used to detect features around the vehicle such as road and lane edges. Cameras are also useful in obstacle detection and avoidance. The term visual odometry refers to vision-based positioning in which camera frames are processed and used for pose estimation [22,23]. Because visual odometry integrates small incremental motions over time, it is bound to drift as the observed scene varies; hence, it is best integrated with other navigation systems to compensate for this drift. Cameras perform well in textured environments with visible features and good illumination; nevertheless, visual odometry performance degrades in low-visibility and low-texture environments.
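To make the drift mechanism concrete, the sketch below chains noisy per-frame pose increments, as visual odometry does, and shows how small per-frame errors compound into growing position error. The motion model and noise levels are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def increment(yaw_err=0.001, trans_err=0.005):
    """One noisy per-frame SE(2) motion estimate: 0.1 m forward plus small errors."""
    yaw = rng.normal(0.0, yaw_err)
    c, s = np.cos(yaw), np.sin(yaw)
    dx = 0.1 + rng.normal(0.0, trans_err)
    return np.array([[c, -s, dx],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

T = np.eye(3)            # accumulated pose (homogeneous SE(2) matrix)
for step in range(1, 1001):
    T = T @ increment()  # odometry composes small incremental motions
    if step % 250 == 0:
        true_x = 0.1 * step  # drift-free straight-line ground truth
        err = np.hypot(T[0, 2] - true_x, T[1, 2])
        print(f"step {step}: position error {err:.3f} m")
```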
Bi-direction Direct RGB-D Visual Odometry
Published in Applied Artificial Intelligence, 2020
Jiyuan Cai, Lingkun Luo, Shiqiang Hu
Visual simultaneous localization and mapping (vSLAM) and visual odometry (VO) are important tasks in the computer vision and robotics communities. VO can be considered a subproblem of vSLAM and focuses on estimating the relative motion between consecutive image frames. In addition to monocular (Lin et al. 2018) and stereo cameras (Cvišic et al. 2017; Ling and Shen 2019), visual odometry with RGB-D sensors has in recent years attracted considerable attention across various research directions (Zhou, Li, and Kneip 2018) and has been successfully applied to unmanned aerial vehicles (Iacono and Sgorbissa 2018), autonomous ground vehicles (Aguilar et al. 2017a), and augmented and virtual reality (Aguilar et al. 2017b).
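The article's bi-direction formulation is not detailed in this excerpt; as a generic illustration of the direct RGB-D approach it builds on, the sketch below evaluates the photometric error of a candidate pose by warping pixels from one frame into the next using depth and camera intrinsics. The intrinsics are placeholder assumptions, and direct methods minimize this error over the pose rather than matching explicit features.

```python
import numpy as np

# Placeholder intrinsics (assumed, not from the article).
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5

def photometric_error(I1, D1, I2, R, t):
    """Sum of squared intensity differences after warping frame 1 into frame 2.

    I1, I2: grayscale images (H x W, float); D1: frame-1 depth in meters;
    (R, t): candidate rigid motion from frame 1 to frame 2.
    """
    H, W = I1.shape
    v, u = np.mgrid[0:H, 0:W]
    z = D1
    # Back-project frame-1 pixels to 3-D, transform, re-project into frame 2.
    X = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=-1)
    Xw = X @ R.T + t
    with np.errstate(divide="ignore", invalid="ignore"):
        u2 = fx * Xw[..., 0] / Xw[..., 2] + cx
        v2 = fy * Xw[..., 1] / Xw[..., 2] + cy
    inside = ((z > 0) & (Xw[..., 2] > 0) &
              (u2 >= 0) & (u2 < W - 1) & (v2 >= 0) & (v2 < H - 1))
    # Nearest-neighbor lookup keeps the sketch short; real systems interpolate.
    r = I2[v2[inside].round().astype(int),
           u2[inside].round().astype(int)] - I1[inside]
    return np.sum(r ** 2)
```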
Estimation of trailer off-tracking using visual odometry
Published in Vehicle System Dynamics, 2019
Christopher de Saxe, David Cebon
Visual odometry is the estimation of the pose and motion of a camera moving through a 3-D scene. Advances in visual odometry algorithms have resulted in its widespread use in autonomous road vehicles and mobile robotics. Compared to other odometry systems such as wheel speed sensors and GPS, visual odometry offers high precision, low-cost hardware and independence from traction conditions. Although vehicle-based visual odometry is commonplace in autonomous vehicles (see, for example, [22,24]), little work has been done with heavy vehicles, with the exception of recent work by Miao, Harris, de Saxe and Cebon [18,25–27].
UAV navigation based on adaptive fuzzy backstepping controller using visual odometry
Published in International Journal of Modelling and Simulation, 2022
Abdelghani Boucheloukh, Farès Boudjema, Nemra Abdelkrim, Fethi Demim, Fouad Yacef
The visual odometry algorithm relies mainly on the accuracy of feature matching between subsequent images. A matched feature characterizes a part of an image using a unique descriptor [44]. Numerous studies have been conducted on the extraction of feature points from an image. One of the best-known algorithms for this purpose is the Harris corner detector [45]. In [46], Lowe proposed the scale-invariant feature transform (SIFT) algorithm for feature extraction and matching.
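As an illustration of the descriptor-based matching this relies on, the sketch below detects SIFT key points in two consecutive frames and matches their descriptors with Lowe's ratio test. The frame paths are placeholder assumptions, and SIFT requires OpenCV 4.4+ (or the opencv-contrib package in older versions).

```python
import cv2

# Placeholder frame paths (assumptions for illustration).
img1 = cv2.imread("frame_t.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT key points and compute their 128-D descriptors.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match each descriptor to its two nearest neighbors and apply
# Lowe's ratio test to discard ambiguous correspondences.
bf = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in bf.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} reliable matches out of {len(kp1)} key points")
```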