Smarter Cars
Published in Patrick Hossay, Automotive Innovation, 2019
One way to ensure this is with a purpose-built high-definition map of everything: roads, buildings, posts, signals, signs, curbs, and all. This is a tall order, and one reason why Google, Uber, and others have driven millions of self-driving miles mapping the environment. Pre-driving specific roads allows a test vehicle to collect sensor data from optical, GPS, and lidar sensors, compare the data to existing maps, and adjust the maps accordingly. This process is tricky, since temporary or minor discrepancies might be recorded as major obstacles; imagine a few traffic cones being mapped as a fixed traffic barrier. So multiple passes and continuous updating are needed. The system can also face difficulty when distinct landmarks are unavailable: crossing a bridge, passing through a tunnel, or driving through a desert or cornfield can all present a challenge.
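As a rough illustration of this multi-pass updating logic, the Python sketch below commits a detected map discrepancy only after it has been observed on several independent passes, so a one-off traffic cone is never promoted to a fixed barrier. The MapUpdater class, the grid-cell keys, and the three-pass threshold are hypothetical choices for illustration, not Hossay's method or any vendor's pipeline.

```python
from collections import defaultdict

# Sketch: a discrepancy between live sensor data and the stored HD map is
# committed only after it is seen on enough distinct passes. All names and
# the threshold below are illustrative assumptions.

CONFIRMATION_PASSES = 3  # passes required before a change is treated as permanent

class MapUpdater:
    def __init__(self):
        self.observations = defaultdict(set)  # map cell -> set of pass ids

    def report_discrepancy(self, cell, pass_id):
        """Record that a pass saw something at `cell` that the map lacks."""
        self.observations[cell].add(pass_id)

    def confirmed_changes(self):
        """Cells seen as changed on enough distinct passes to update the map."""
        return [cell for cell, passes in self.observations.items()
                if len(passes) >= CONFIRMATION_PASSES]

updater = MapUpdater()
for pass_id in range(3):
    updater.report_discrepancy((120, 45), pass_id)   # persistent barrier
updater.report_discrepancy((80, 12), 0)              # one-off traffic cone
print(updater.confirmed_changes())                   # [(120, 45)]
```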
A framework for definition of logical scenarios for safety assurance of automated driving
Published in Traffic Injury Prevention, 2019
Hendrik Weber, Julian Bock, Jens Klimke, Christian Roesener, Johannes Hiller, Robert Krajewski, Adrian Zlocki, Lutz Eckstein
Movable objects and their relations are defined in layer 4. This especially includes all traffic participants, such as motor vehicles and vulnerable road users. Layer 5 describes environmental variables that influence the other layers; for example, rain lowering the friction coefficient in layer 1. The model, initially consisting of five layers (Bagschik et al. 2018b), was extended by a sixth layer, which covers digital information such as the availability of high-definition map data or vehicle-to-everything communication (Bock et al. 2018).
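This layered structure lends itself to a simple data-structure sketch. The Python below mirrors the six-layer model described above; the characterizations of layers 1 through 3 (road level, traffic infrastructure, and temporary modifications) and all field names and contents are illustrative assumptions, not the paper's schema.

```python
from dataclasses import dataclass, field

# Hypothetical encoding of the six-layer scenario model. Layers 4-6 follow
# the excerpt (movable objects, environment, digital information); the
# contents of layers 1-3 are assumed for illustration.

@dataclass
class LogicalScenario:
    layer1_road: dict = field(default_factory=dict)            # e.g., geometry, friction
    layer2_infrastructure: dict = field(default_factory=dict)  # e.g., signs, barriers
    layer3_temporary: dict = field(default_factory=dict)       # e.g., roadworks
    layer4_movable_objects: list = field(default_factory=list) # traffic participants
    layer5_environment: dict = field(default_factory=dict)     # weather, lighting
    layer6_digital_info: dict = field(default_factory=dict)    # HD map, V2X availability

# Layer 5 (rain) influencing layer 1 (friction), as in the excerpt's example:
scenario = LogicalScenario(
    layer1_road={"friction_coefficient": 0.4},
    layer4_movable_objects=["motor_vehicle", "pedestrian"],
    layer5_environment={"rain": True},
    layer6_digital_info={"hd_map_available": True, "v2x": False},
)
```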
A new active safety distance model of autonomous vehicle based on sensor occluded scenes
Published in International Journal of Modelling and Simulation, 2021
Chaochun Yuan, Tong Wang, Jie Shen, Youguo He, Shuofeng Weng
This paper proposes a driving speed control method that predicts the motion characteristics of obstacles in sensor-occluded scenes using a camera, a high-definition map, and the Global Positioning System (GPS). The results can improve driving safety for advanced driver-assistance systems and autonomous driving systems in sensor-occluded scenes.
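To make the idea of occlusion-aware speed control concrete, here is a minimal sketch assuming a simple kinematic stopping-distance model: the ego speed is capped so the vehicle can stop within the visible distance, in case an obstacle emerges at the edge of the occluded region. The function, parameters, and values are illustrative assumptions, not the model proposed in the paper.

```python
import math

# Sketch: largest ego speed v satisfying v*t_r + v^2/(2*a) <= d, where d is
# the visible distance to the occlusion edge, t_r the reaction time, and a
# the braking deceleration. All parameter values are assumed.

def max_safe_speed(visible_distance_m, reaction_time_s=1.0, decel_mps2=6.0):
    """Positive root of v^2/(2*a) + v*t_r - d = 0."""
    a, t = decel_mps2, reaction_time_s
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * visible_distance_m)

print(f"{max_safe_speed(30.0):.1f} m/s")  # cap 30 m before an occlusion edge
```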
Analysis on autonomous vehicle detection performance according to various road geometry settings
Published in Journal of Intelligent Transportation Systems, 2023
Jaehyun (Jason) So, Jun Hwangbo, Sang Hyun Kim, Ilsoo Yun
In addition to the use of a single AV sensor, a number of studies have investigated sensor fusion technologies using data from multiple sensors on an AV (Kocic et al., 2018). Premebida et al. (2007) used in-vehicle LiDAR and monocular vision for identifying vehicles and pedestrians, respectively; a Bayesian sum decision rule was used to combine the two classification results from the LiDAR and vision sensors, yielding more reliable object classification. Ilci and Toth (2020) assessed the performance of AV sensor systems in obtaining a high-definition map by tracking objects with a test vehicle equipped with several Velodyne LiDAR sensors and a high-performance global navigation satellite system/inertial measurement unit (GNSS/IMU) navigation system. De Silva et al. (2018) combined LiDAR and camera image sensors to improve detection of the surroundings, utilizing a Gaussian process (GP) regression algorithm to match the differing resolutions of the two sensors' datasets by interpolation. Kim et al. (2015) investigated how a cooperative sensor system can extend the perception capability of an AV, including its line of sight and field of view of the road. The cooperative perception used in that study combined the AV's sensors (e.g., radar and LiDAR) with wireless communication, and it turned out that cooperative perception enabled the AV to react to the forward situation 1.43 seconds earlier. Aoki et al. (2020) also investigated the perception capability of a combination of vision (i.e., camera), LiDAR, and radar in detecting surrounding objects; to demonstrate the benefit of sensor fusion, that study proposed a cooperative perception scheme with deep reinforcement learning and measured how much detection performance could be increased. Further efforts in the recent decade have explored the benefits of sensor fusion (Dai et al., 2021; Hobert et al., 2015; Kim & Liu, 2016). However, while these recent studies focused on the potential benefit of sensor fusion, it remains unclear how insufficient the use of homogeneous sensors is for the perception capability of an AV.
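The Bayesian sum decision rule cited from Premebida et al. (2007) can be sketched in a few lines: each sensor's classifier emits per-class posterior probabilities, the posteriors are summed across sensors, and the class with the highest combined score is selected. The class list, the equal sensor weighting, and the example probabilities below are illustrative assumptions, not values from that study.

```python
import numpy as np

# Sketch of sum-rule fusion of a LiDAR-based and a vision-based classifier:
# per-sensor class posteriors are summed (equal weights assumed) and the
# argmax of the combined score gives the fused decision.

CLASSES = ["vehicle", "pedestrian", "background"]

def fuse_sum_rule(posteriors_per_sensor):
    """Combine per-sensor class posteriors with the sum rule; pick a class."""
    combined = np.sum(posteriors_per_sensor, axis=0)
    return CLASSES[int(np.argmax(combined))], combined

lidar_posterior  = np.array([0.55, 0.30, 0.15])  # assumed example values
vision_posterior = np.array([0.40, 0.50, 0.10])
label, scores = fuse_sum_rule([lidar_posterior, vision_posterior])
print(label, scores)  # vehicle [0.95 0.8 0.25]
```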