An Overview of Deep Learning in Industry
Published in Jay Liebowitz, Data Analytics and AI, 2020
Quan Le, Luis Miralles-Pechuán, Shridhar Kulkarni, Jing Su, Oisín Boydell
An autonomous car or self-driving car is “a vehicle that is capable of sensing its environment and navigating without any human input” (Hussain, 2016). Deep learning approaches are often used for object recognition as part of an autonomous driving pipeline, and technologies based on these systems dominate the current commercial autonomous driving efforts. Google’s self-driving car unit, for example, started in 2009 and over the next seven years drove more than two million miles of test journeys on open roads. These cars make extensive use of deep learning models for object recognition. Similarly, the Tesla Autopilot system incorporates deep learning systems developed by Tesla, based on convolutional neural networks (CNNs), to recognize objects using vision, sonar, and radar sensors. There are also examples of smaller startup self-driving car companies such as Drive.ai, which created deep learning-based software for autonomous vehicles, or Tealdrones.com, a startup that equips drones with onboard deep learning modules for image recognition and navigation.
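To make the role of CNN-based object recognition concrete, the sketch below defines a deliberately tiny convolutional classifier in PyTorch. The architecture, layer sizes, and class count are illustrative assumptions and do not reflect any of the production systems mentioned above.

```python
# A minimal sketch of a CNN image classifier of the kind used for object
# recognition in driving pipelines. All sizes here are illustrative.
import torch
import torch.nn as nn

class TinyObjectCNN(nn.Module):
    def __init__(self, num_classes: int = 10):  # num_classes is a placeholder
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # RGB camera input
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# One forward pass on a dummy batch of two 64x64 camera crops.
model = TinyObjectCNN()
logits = model(torch.randn(2, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 10]), one score per object class
```

Production perception networks are far deeper and are trained on large labeled driving datasets, but the structure, stacked convolutions feeding a classification head, is the same basic idea.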
Potential Applications of Internet of Things Technology
Published in Abhik Chaudhuri, Internet of Things, for Things, and by Things, 2018
Tesla vehicles also have forward-facing radar that can see through heavy rain, fog, dust, and the car ahead. A deep neural network-based vision processing tool named Tesla Vision processes vision, radar, and server data on a powerful onboard computer. Tesla Vision deconstructs the car’s environment with a high level of reliability. Tesla Autopilot can match speed to traffic conditions, keep the car within its driving lane, change lanes automatically, transition between freeways, self-park in a parking spot, and be summoned to and from a garage. These vehicles can also receive over-the-air software updates as needed.
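The key idea above is that fusing vision and radar lets one sensor compensate when the other is degraded, for example radar seeing through fog. The Python sketch below illustrates that idea with a toy rule-based fusion function; the data classes, thresholds, and `fuse` logic are hypothetical and are not based on Tesla Vision’s actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CameraDetection:
    label: str
    confidence: float  # 0..1 score from the vision network

@dataclass
class RadarReturn:
    range_m: float            # distance to the reflector in metres
    closing_speed_mps: float  # positive means the object is approaching

def fuse(camera: Optional[CameraDetection],
         radar: Optional[RadarReturn]) -> bool:
    """Decide whether the fused evidence warrants an alert.

    The thresholds (0.8, 30 m, 5 m/s) are arbitrary illustration values.
    """
    if camera is not None and camera.confidence >= 0.8:
        return True  # vision alone is confident
    if radar is not None and radar.range_m < 30 and radar.closing_speed_mps > 5:
        return True  # radar sees a fast-closing object even if vision is unsure
    return False

# Fog scenario: the camera is uncertain, but radar sees through it.
print(fuse(CameraDetection("vehicle", 0.4), RadarReturn(22.0, 8.0)))  # True
```

Real systems fuse probabilistically over time (e.g., with tracking filters) rather than with fixed thresholds, but the complementary-sensor principle is the same.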
Investigating the effects of age and disengagement in driving on driver’s takeover control performance in highly automated vehicles
Published in Transportation Planning and Technology, 2019
Shuo Li, Phil Blythe, Weihong Guo, Anil Namdeo
Existing advanced driver-assistance systems (ADAS) provide drivers with different types of assistance; some support the driver in the longitudinal and/or lateral control of the vehicle, for example Adaptive Cruise Control (ACC), Lane Keeping Assist (LKA), Intelligent Speed Adaptation (ISA), and Tesla Autopilot (Guo et al. 2010, 2013; Lin, Ma, and Zhang 2018). Although these systems allow drivers to be physically disengaged from driving occasionally, drivers are not allowed to take their eyes off the road and must constantly monitor the system’s driving (SAE 2014; DfT 2015b; Li et al. 2019). In the potentially forthcoming highly automated vehicles (HAV), referred to as Level 3 automation (SAE 2014), the level of vehicle autonomy is further increased, allowing drivers to be completely disengaged from driving: they can take their eyes off the road and safely perform non-driving related tasks during automated driving, such as reading, watching a movie, or using a mobile phone. However, drivers are still expected to take over vehicle control in some situations (SAE 2014; DfT 2015b; Li et al. 2019).
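The Level 3 behavior described above, automated driving with a driver who may be fully disengaged until the system issues a takeover request, can be summarized as a small state machine. The Python sketch below is a minimal illustration; the mode names, transitions, and the 7-second lead time are assumptions made for the example, not values taken from the cited standards.

```python
from enum import Enum, auto

class Mode(Enum):
    MANUAL = auto()              # driver in full control
    AUTOMATED = auto()           # Level 3: driver may be disengaged
    TAKEOVER_REQUESTED = auto()  # system asks the driver to resume control

class HighlyAutomatedVehicle:
    def __init__(self) -> None:
        self.mode = Mode.AUTOMATED

    def issue_takeover_request(self, lead_time_s: float) -> None:
        # The system detects a situation it cannot handle and alerts the
        # driver, who has lead_time_s seconds to resume control.
        self.mode = Mode.TAKEOVER_REQUESTED
        print(f"Take over within {lead_time_s} s")

    def driver_resumes(self) -> None:
        # The driver re-engages, e.g., by grabbing the wheel or braking.
        self.mode = Mode.MANUAL

hav = HighlyAutomatedVehicle()
hav.issue_takeover_request(lead_time_s=7.0)  # 7 s is an illustrative budget
hav.driver_resumes()
print(hav.mode)  # Mode.MANUAL
```

How quickly and safely drivers complete the AUTOMATED-to-MANUAL transition, especially when they have been disengaged in non-driving tasks, is precisely the takeover performance question this article investigates.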