Augmented Reality for Sports Spectating and Coaching
Published in Veronika Tzankova, Michael Filimowicz, Interactive Sports Technologies, 2022
Stefanie Zollmann, Tobias Langlotz, Holger Regenbrecht, Chris Button, Wei Hong Lo, Steven Mills
The localization methods described in the previous section allow for an initial global alignment between the user's device and the real environment. After initialization, tracking methods continuously update this relationship in order to compute the camera pose required for placing the AR content. Simultaneous Localization and Mapping (SLAM) approaches are often used as the tracking technology in AR. SLAM is a tracking algorithm that has its origin in robotics and computes a map of the environment while simultaneously using it to localize a device or camera (Davison & Murray, 2002). In the mapping part of the algorithm, a 3D map is often computed from 2D image features such as SIFT. These image features are identified and tracked across consecutive frames seen by the camera. Their 2D locations in the images define 3D rays, and the 3D position of each feature is computed from those rays using triangulation. The resulting 3D positions are stored in a map that is then used to compute the position (and orientation) of the device or camera. While traditional robotic SLAM approaches use stereo cameras or additional sensors (e.g., laser), monocular approaches have been developed for mobile AR applications to support single-camera devices.
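The triangulation step described here can be sketched in a few lines. The following is a minimal illustration, not the chapter's implementation: the intrinsics, frame poses, and pixel coordinates are invented placeholders, and OpenCV's `cv2.triangulatePoints` stands in for whatever solver a real SLAM system uses.

```python
# Minimal sketch of triangulating a tracked 2D feature into a 3D map point.
# All numeric values (intrinsics, baseline, pixel positions) are illustrative.
import numpy as np
import cv2

K = np.array([[700.0,   0.0, 320.0],   # hypothetical camera intrinsics
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Projection matrices for two frames: the first at the origin, the second
# translated by 0.1 m along x (the stereo-like baseline created by motion).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

# The same feature (e.g., a SIFT keypoint) observed in both frames (pixels).
pts1 = np.array([[340.0], [250.0]])
pts2 = np.array([[310.0], [250.0]])

# Intersect the two back-projected rays; result is in homogeneous coordinates.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).ravel()  # dehomogenize to a 3D map point
print("Triangulated map point:", X)
```

Repeating this for every tracked feature yields the 3D map that the pose estimator then matches against in subsequent frames.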
Applications of Deep Learning in Robot Vision
Published in Mahmoud Hassaballah, Ali Ismail Awad, Deep Learning in Computer Vision, 2020
Javier Ruiz-del-Solar, Patricio Loncomilla
Another important area of research is the combination of new methods based on deep learning with classical vision methods based on geometry, which have been very successful in robotics (e.g., visual SLAM). This topic has been addressed at recent workshops [105] and conferences [106]. There are several ways in which these important and complementary paradigms can be combined. On the one hand, geometry-based methods such as Structure from Motion and visual SLAM can be used for training deep learning-based vision systems: geometry-based methods can extract and model the structure of the environment, and this structure can then assist the learning process of a DNN. On the other hand, DNNs can be used to learn to compute visual odometry automatically, and eventually to learn the parameters of a visual SLAM system. In a very recent study, spatial transformer modules (STMs) were extended to 3D spatial transformations [98], allowing end-to-end learning of DNNs that compute visual odometry. It is expected that STM-based techniques will enable end-to-end training for optical flow, depth estimation, place recognition with geometric invariance, small-scale bundle adjustment, etc. [98].
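To make the learned-visual-odometry idea concrete, here is a toy PyTorch sketch of a network that regresses the relative pose between two consecutive frames. The architecture, input sizes, and training target are invented stand-ins; this is not the STM-based method of [98], only an illustration of the end-to-end setup, where supervision could come from a geometry-based pipeline.

```python
# Toy end-to-end visual odometry: a small CNN regresses the 6-DoF relative
# pose (3 translation + 3 rotation parameters) between two frames.
import torch
import torch.nn as nn

class VONet(nn.Module):
    def __init__(self):
        super().__init__()
        # Two RGB frames stacked along the channel axis -> 6 input channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.pose_head = nn.Linear(128, 6)  # (tx, ty, tz, rx, ry, rz)

    def forward(self, frame_t, frame_t1):
        x = torch.cat([frame_t, frame_t1], dim=1)
        return self.pose_head(self.encoder(x).flatten(1))

# One training step: supervise with relative poses, e.g., produced by a
# geometry-based method such as Structure from Motion (dummy tensors here).
model = VONet()
frames_t = torch.randn(4, 3, 128, 416)
frames_t1 = torch.randn(4, 3, 128, 416)
gt_pose = torch.randn(4, 6)  # placeholder ground-truth relative poses
loss = nn.functional.mse_loss(model(frames_t, frames_t1), gt_pose)
loss.backward()
```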
Perception
Published in Hanky Sjafrie, Introduction to Self-Driving Vehicle Technology, 2019
As we saw in the previous section, building an accurate map is only possible when the exact pose of the vehicle is known. In reality, determining the exact pose of the vehicle is difficult due to sensor measurement errors. Simultaneous Localization and Mapping (SLAM) tries to solve this classic chicken-and-egg problem. SLAM is a technique that seeks to build an accurate map of the environment while simultaneously localizing the vehicle within that map. As shown in Figure 3.5, the problem is that both the observed map features and the vehicle's reported location suffer from errors, and these errors grow the further the vehicle travels from its last known location. However, thanks to loop closure, the recognition that the vehicle has returned to a previously visited place, SLAM can correct the accumulated drift once the whole route has been driven as a loop, and generate a consistent and accurate map. The resulting map can then be used to accurately localize the vehicle anywhere within it by running the same SLAM algorithm in localization-only mode (without mapping).
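The effect of a loop closure can be illustrated with a deliberately simplified sketch: odometry noise accumulates along a driven square, and recognizing the start point lets us distribute the residual back along the trajectory. Real systems do this with full pose-graph optimization; the linear smoothing below is only a toy stand-in.

```python
# Toy loop-closure correction: drift grows along the loop, then the
# start/end mismatch is spread evenly over the intermediate poses.
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: a square loop starting and ending at the origin.
true_path = np.array([[0, 0], [10, 0], [10, 10], [0, 10], [0, 0]], float)

# Odometry estimate: each step picks up noise, so the estimated end
# point no longer coincides with the start point.
steps = np.diff(true_path, axis=0) + rng.normal(0, 0.3, (4, 2))
est_path = np.vstack([[0, 0], np.cumsum(steps, axis=0)])

# Loop closure: pose 4 is recognized as pose 0; distribute the residual.
residual = est_path[0] - est_path[-1]
weights = np.linspace(0, 1, len(est_path))[:, None]
corrected = est_path + weights * residual

print("drift before closure:", np.linalg.norm(est_path[-1] - est_path[0]))
print("drift after closure: ", np.linalg.norm(corrected[-1] - corrected[0]))
```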
Comparative analysis of mobile laser scanning and terrestrial laser scanning for the indoor mapping
Published in Building Research & Information, 2023
Abdurahman Yasin Yiğit, Seda Nur Gamze Hamal, Ali Ulvi, Murat Yakar
The WMLS system is based on the simultaneous localization and mapping (SLAM) algorithm. This technique enables the creation of a map of an unknown environment by traversing it with range sensors while simultaneously determining the location of the system within that map (Di Filippo et al., 2018; Niu et al., 2017). It thus provides the opportunity to obtain point clouds by moving through different areas and scanning from different locations (Cui et al., 2019). Working with the SLAM algorithm, the device receives data from its sensors (a Velodyne Puck LITE LiDAR sensor and an IMU) in order to determine where it is in the environment. The SLAM algorithm uses these data to detect the geometric objects (walls, floors, columns, etc.) around the system, to create a locally coordinated environment map, and to calculate the best estimate of its location within that map. In WMLS devices, the SLAM algorithm underpins the map and model creation processes (Chen et al., 2018; Wang et al., 2020).
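At the core of such LiDAR SLAM pipelines is scan matching: aligning a new range scan to the growing map by estimating a rigid transform. The sketch below shows one SVD-based (Kabsch) alignment step with correspondences assumed known; a production system like the one described would also fuse the IMU data and iterate with nearest-neighbor matching, which is omitted here.

```python
# One scan-matching step: rigid transform (R, t) minimizing
# ||R @ src + t - dst|| over matched point pairs (Kabsch algorithm).
import numpy as np

def align_scans(src, dst):
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: rotate/translate a scan and recover the transform.
rng = np.random.default_rng(1)
scan = rng.uniform(-5, 5, (100, 3))              # points from a range sensor
theta = 0.2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
moved = scan @ R_true.T + np.array([0.5, -0.3, 0.0])
R, t = align_scans(scan, moved)
print(np.allclose(R, R_true), t)                 # recovers rotation and shift
```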
Fast Bayesian graph update for SLAM
Published in Advanced Robotics, 2022
Shun Taguchi, Hideki Deguchi, Noriaki Hirose, Kiyosumi Kidono
Bayesian-filtering-based methods have also been studied in recent years [34–36]. These methods have been used extensively for SLAM in autonomous mobile robots, mainly with LiDAR. The most popular approaches maintain the current pose and the map as their state, as in FastSLAM [35] and EKF-SLAM [36]. However, in recent VSLAM, Bayesian-filtering-based approaches have lagged behind bundle-adjustment-based (BA-based) approaches in accuracy and in the computational cost of handling the large number of image features used for local pose estimation [37]. The advantage of Bayesian filtering is that it can approach the optimal solution by sequentially processing time-series observations. Our Bayesian graph update (BGU) employs Bayesian filtering for pose-graph estimation, enabling loop closing to be executed sequentially and accelerating it. In addition, by employing Bayesian filtering only for graph updates, we are able to incorporate state-of-the-art BA-based methods for local pose estimation.
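The sequential structure that makes Bayesian filtering attractive is visible even in a linear-Gaussian toy: each predict/update cycle fuses one new observation into the running estimate. This sketch is far simpler than EKF-SLAM or the paper's BGU (the state here is just a 2D pose, with identity motion and observation models), but it shows the mechanism.

```python
# Minimal Kalman predict/update cycle: sequentially fuse time-series
# observations into a pose estimate with an explicit uncertainty.
import numpy as np

x = np.zeros(2)          # state: 2D pose (x, y)
P = np.eye(2) * 1.0      # state covariance
Q = np.eye(2) * 0.05     # motion (process) noise
R = np.eye(2) * 0.2      # observation noise

def predict(x, P, u):
    """Propagate the state with odometry input u; uncertainty grows."""
    return x + u, P + Q

def update(x, P, z):
    """Fuse an observation z of the pose; uncertainty shrinks."""
    S = P + R                      # innovation covariance (H = I here)
    K = P @ np.linalg.inv(S)       # Kalman gain
    return x + K @ (z - x), (np.eye(2) - K) @ P

for u, z in [(np.array([1.0, 0.0]), np.array([1.1, 0.05])),
             (np.array([1.0, 0.0]), np.array([2.0, -0.02]))]:
    x, P = predict(x, P, u)
    x, P = update(x, P, z)
print("pose estimate:", x, "covariance trace:", np.trace(P))
```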
Improving the prevention of fall from height on construction sites through the combination of technologies
Published in International Journal of Occupational Safety and Ergonomics, 2022
María del Carmen Rey-Merchán, Jesús M. Gómez-de-Gabriel, Juan-Antonio Fernández-Madrigal, Antonio López-Arquillos
If we relax this computational power limitation, e.g., through remote computation (at the expense of a shorter battery life due to the communications), we could afford improvements in estimating the closeness of the receiver and the beacons. In particular, we could then implement a range-only simultaneous localization and mapping (SLAM) algorithm. SLAM is defined as the process by which a mobile robot can build a map of the environment and, at the same time, use this map to compute its location [33]. Such a solution would adapt automatically, after a suitable period of walking through the environment, to multiple beacons placed in diverse, unknown locations. Since each beacon transmits its own identifier to the receiver, a complete map of the environment (i.e., the set of beacons and their locations) plus the full path of the worker would be provided, unlike in our previous solution, where we only estimate his/her instantaneous distance to the beacons. The map and the full path are very useful for tracing worker behavior in the case of an accident, and also for deducing entries into risky zones more robustly.
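The range-only estimation underlying such a system can be sketched as a nonlinear least-squares fix: given beacon positions (assumed already known here) and noisy range measurements, recover the receiver position. A full range-only SLAM solution would additionally estimate the beacon positions themselves from the worker's walk through the site; all coordinates and noise levels below are invented for illustration.

```python
# Position fix from noisy beacon ranges via nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

beacons = np.array([[0.0, 0.0], [20.0, 0.0], [10.0, 15.0]])  # beacon map (m)
true_pos = np.array([12.0, 6.0])                              # worker position

rng = np.random.default_rng(2)
ranges = np.linalg.norm(beacons - true_pos, axis=1) + rng.normal(0, 0.1, 3)

def residuals(p):
    """Difference between predicted and measured beacon distances."""
    return np.linalg.norm(beacons - p, axis=1) - ranges

fix = least_squares(residuals, x0=np.array([5.0, 5.0]))
print("estimated worker position:", fix.x)  # close to true_pos
```

Running this fix at each step, with the beacon identifiers disambiguating the measurements, yields the full worker path described above.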