Advanced Sensors and Vision Systems
Published in Marina Indri, Roberto Oboe, Mechatronics and Robotics, 2020
Several applications of multiple advanced sensors to automotive navigation in intelligent transportation systems (ITS) are discussed. The motion and localization estimation of an outdoor robot is addressed using a combination of cameras and laser range finder (LRF) sensors, based on an incremental method combined with global positioning system (GPS) information. Perspective cameras, omnidirectional cameras, and LRF sensors are investigated so that the strengths of each device can be exploited. Motion estimation from sequential images for autonomous navigation is necessarily an incremental approach because of the processing time required when the robot travels over a long distance; to speed up computation, a vision-based compass approach is also proposed and applied. The cumulative error of visual odometry is corrected using GPS information under maximum likelihood estimation in a filter framework such as the extended Kalman filter (EKF). Practical experiments in a variety of conditions demonstrate the accuracy of robot localization achieved by this integration of multiple sensors. Furthermore, a robust global motion estimation method is presented that matches corresponding points across sequential images using a bundle adjustment technique combined with GPS information.
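To make the fusion idea concrete, the following is a minimal sketch of GPS-corrected visual odometry in an EKF for a planar robot with state [x, y, theta]. The function names, the noise covariances Q and R, and the simple motion model are illustrative assumptions, not the chapter's actual implementation.

```python
import numpy as np

def ekf_predict(x, P, u, Q):
    """Propagate the state with a visual-odometry increment u = (dx, dy, dtheta)
    expressed in the robot frame; Q is the odometry noise covariance."""
    c, s = np.cos(x[2]), np.sin(x[2])
    x_new = x + np.array([c * u[0] - s * u[1],
                          s * u[0] + c * u[1],
                          u[2]])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1.0, 0.0, -s * u[0] - c * u[1]],
                  [0.0, 1.0,  c * u[0] - s * u[1]],
                  [0.0, 0.0,  1.0]])
    return x_new, F @ P @ F.T + Q

def ekf_update_gps(x, P, z, R):
    """Correct accumulated odometry drift with a GPS position fix z = (x, y)."""
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])        # GPS observes position only
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P
```

Between GPS fixes, the filter integrates odometry increments via `ekf_predict`, so drift grows with distance; each `ekf_update_gps` call pulls the estimate back toward the absolute GPS position, which is the role GPS correction plays in the method described above.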
Visual Surveillance of Natural Environments: Background Subtraction Challenges and Methods
Published in Lavanya Sharma, Pradeep K Garg, From Visual Surveillance to Internet of Things, 2019
Thierry Bouwmans, Belmar García-García
In forests, the foreground objects to be detected are animals or humans. The main challenges are partial occlusion of the objects of interest and abrupt transitions between light and shadow caused by the continuous movement of the tree foliage. Boult et al. [72] conducted experiments using an omnidirectional camera to detect humans. Subsequently, Shakeri and Zhang [73] used a camera trap to detect animals in forest clearings, implementing a robust PCA method that outperformed most previous PCA variants on the illumination change dataset (ICD) [73].
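As a rough illustration of how robust PCA separates a scene into background and foreground, below is a generic principal component pursuit solver (inexact augmented Lagrangian), not Shakeri and Zhang's specific method; the parameter defaults follow common choices in the RPCA literature and are assumptions here.

```python
import numpy as np

def rpca_pcp(D, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Decompose a frame matrix D (pixels x frames) into a low-rank
    background L and a sparse foreground S via principal component pursuit."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(D).sum() + 1e-12)
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)   # Lagrange multiplier
    for _ in range(max_iter):
        # Singular-value thresholding yields the low-rank (background) term
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Elementwise soft thresholding yields the sparse (foreground) term
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        Y += mu * (D - L - S)
        if np.linalg.norm(D - L - S) <= tol * np.linalg.norm(D):
            break
    return L, S
```

In a camera-trap setting, grayscale frames would be flattened and stacked as the columns of D; the slowly varying forest background lands in L, while the nonzero entries of S mark moving animals or humans, which is why RPCA-style methods cope comparatively well with illumination changes.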
Navigation behavior based on self-organized spatial representation in hierarchical recurrent neural network
Published in Advanced Robotics, 2019
Wataru Noguchi, Hiroyuki Iizuka, Masahito Yamamoto
The mobile robot is modeled as an agent that moves around the environment. Equipped with an omnidirectional camera, the agent obtains an omnidirectional view as it moves around the arena, and it can move to any of the eight adjacent cells in one time step. The omnidirectional camera is implemented as a capturing system consisting of four cameras that together cover the entire view around the agent, each facing a different direction (north, south, east, or west). The size of the captured image for each camera is , and each pixel has three channels (RGB). The four captured images are concatenated horizontally into a single visual input of size with three color channels. While capturing the images, the camera positions are perturbed by Gaussian noise with zero mean and a standard deviation of 0.1, where the size of a single grid cell is defined as 1.
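A minimal sketch of this capture step is given below, assuming a hypothetical `render(position, heading)` callback that returns an (h, w, 3) RGB array for one directional camera; the function name and signature are illustrative, not the paper's code.

```python
import numpy as np

def capture_omnidirectional(render, agent_pos, noise_std=0.1, rng=None):
    """Build the agent's visual input: render four directional views,
    perturbing each camera's position with zero-mean Gaussian noise
    (std 0.1 in grid-cell units, as in the text), then concatenate
    the views horizontally into a single panoramic image."""
    rng = rng if rng is not None else np.random.default_rng()
    views = []
    for heading in ("north", "east", "south", "west"):
        noisy_pos = np.asarray(agent_pos, dtype=float) + rng.normal(0.0, noise_std, size=2)
        views.append(render(noisy_pos, heading))   # each view: (h, w, 3) RGB
    return np.concatenate(views, axis=1)           # panorama: (h, 4*w, 3)
```

Concatenating along the width axis preserves the left-to-right ordering of the four headings, so the network receives one contiguous image covering the full 360-degree view around the agent.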