Human Detection and Tracking Using Background Subtraction in Visual Surveillance
Published in Lavanya Sharma, Towards Smart World, 2020
In recent times, motion-based object detection has become a major focus of research in artificial intelligence (AI) applications [1, 2, 3]. Object detection is a vital step in many computer vision and machine learning applications because of its real-time uses in tracking of moving objects, visual surveillance, pedestrian detection, indoor/outdoor security, suspicious-activity detection, logo detection, e-health, traffic monitoring, location prediction using IP cameras, defense, and automated (self-driving) cars. Several techniques have been developed to differentiate the foreground (FG) object from the background (BG) scene of a video sequence [4, 5, 6]. Usually, the BG contains objects that remain static in the scene [1, 2, 7, 8], such as room furniture, walls, and doors, together with quasi-static disturbances such as changing lighting conditions, wavering bushes, escalators, and slow leafy movement. To detect moving objects in real-time applications, several methods are used, such as optical flow and background subtraction (BGS).
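The BGS idea can be illustrated with a minimal running-average background model: each pixel's background estimate is updated slowly, and pixels that deviate strongly from it are flagged as foreground. This is a simplified NumPy sketch, not any specific method from the chapter; the learning rate and threshold values are illustrative assumptions.

```python
import numpy as np

def subtract_background(frames, alpha=0.05, threshold=25):
    """Running-average background subtraction (BGS) sketch.

    frames: iterable of 2-D uint8 grayscale frames.
    alpha: background learning rate (assumed value).
    threshold: absolute-difference cutoff for foreground pixels.
    Returns a list of binary foreground masks.
    """
    background = None
    masks = []
    for frame in frames:
        f = frame.astype(np.float32)
        if background is None:
            background = f.copy()          # initialise BG model with first frame
        diff = np.abs(f - background)      # per-pixel deviation from BG model
        mask = (diff > threshold).astype(np.uint8)
        background = (1 - alpha) * background + alpha * f  # slowly adapt BG
        masks.append(mask)
    return masks

# Synthetic demo: a bright 10x10 "object" moves across a dark scene.
frames = []
for x in (0, 20, 40):
    frame = np.zeros((64, 64), dtype=np.uint8)
    frame[20:30, x:x + 10] = 255
    frames.append(frame)

masks = subtract_background(frames)
# The first frame only initialises the model (empty mask); later
# frames flag the moved object as foreground.
```

In practice, richer models such as per-pixel Gaussian mixtures are used so that wavering bushes and lighting changes are absorbed into the background rather than flagged as motion.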
Older pedestrians and cyclists
Published in Carryl L. Baldwin, Bridget A. Lewis, Pamela M. Greenwood, Designing Transportation Systems for Older Adults, 2019
Carryl L. Baldwin, Bridget A. Lewis, Pamela M. Greenwood
As mentioned briefly in Chapter 4, there are also technological approaches to reducing risk to pedestrians. Some vehicle manufacturers have developed “pedestrian detection” systems that use a camera and radar to attempt to detect people (and, for some manufacturers, bicyclists) in the vehicle's path. These systems are designed to warn the driver and, in some systems, to automatically apply the brakes if needed. These systems do have limits, which are detailed in owner's manuals. For example, systems may not recognize very short or very tall pedestrians, those carrying a large object, or pedestrians at dawn or dusk. These systems may not completely avoid a collision but could mitigate one. NHTSA researchers analyzed the effectiveness of pedestrian detection systems against historical crash data and estimated that such systems would prevent 5,000 vehicle-pedestrian crashes each year, including 810 fatal crashes.
Introduction
Published in Neil Collings, Fourier Optics in Image Processing, 2018
Computer vision is the endeavor to reproduce the remarkable performance of the human visual system (HVS) in a machine. It is both important and difficult. Representative applications include medical diagnosis, quality inspection, robotics, driverless cars, missile guidance, car number plate recognition, pedestrian detection, and tracking people in a crowd. Image recognition and identification in complex scenes are the most actively researched areas. Computer vision is conventionally divided into three functional areas: low-level, mid-level, and high-level vision. This division follows a hierarchical approach to the HVS. Low-level vision covers the segmentation of the image taking place between the eye and the cortex. Mid-level vision, involving colour, form, and movement, occurs in the prestriate cortex. Finally, high-level vision, such as recognition, occurs in the inferior temporal cortex. The neural signals do not travel in a single direction from the low to the high level: there is feedback of neural activity between the three divisions. Experimental data show that visual comprehension of images is fast [206]. This has been explained on the basis of modelling using feature extraction operating within a hierarchical feedforward neural network architecture [234].
SSAD: Single-Shot Multi-Scale Attentive Detector for Autonomous Driving
Published in IETE Technical Review, 2023
Balaram Murthy Chintakindi, Mohammad Farukh Hashmi
To balance detection accuracy against speed, Li Z et al. [28] proposed a real-time pedestrian detector. The proposed network has two modules, Region Generation (RG) and Region Prediction (RP), which operate in parallel to improve processing speed. The network was evaluated only on the INRIA, ETH, and Caltech pedestrian datasets and achieved real-time detection accuracy. B. Han et al. [29] proposed a novel small-scale sense (SSN) network, which generates proposal regions and is effective for detecting small-scale pedestrians. Building on cross-entropy loss, they proposed a novel loss function that increases the loss contribution of small-scale pedestrians, which are hard to detect. The network's performance was validated on two pedestrian datasets, Caltech and VIP [29].
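The idea of boosting the loss contribution of small-scale pedestrians can be sketched as a scale-weighted binary cross-entropy, where samples matched to short boxes receive larger weights. The weighting scheme below is an illustrative assumption, not the exact loss from [29]; `ref_height` is a hypothetical parameter marking the box height at which the weight is 1.

```python
import numpy as np

def scale_weighted_bce(probs, labels, box_heights, ref_height=80.0, eps=1e-7):
    """Illustrative scale-weighted binary cross-entropy.

    probs: predicted pedestrian probabilities in (0, 1).
    labels: ground-truth {0, 1} labels.
    box_heights: pixel heights of the matched boxes; shorter boxes
        (small-scale pedestrians) receive a larger weight.
    ref_height: height at which the weight equals 1 (assumed value).
    """
    probs = np.clip(probs, eps, 1 - eps)
    # Standard per-sample cross-entropy.
    ce = -(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))
    # Up-weight small boxes: the weight grows as height shrinks below ref_height.
    weights = np.maximum(ref_height / np.asarray(box_heights, dtype=float), 1.0)
    return float(np.mean(weights * ce))

probs = np.array([0.9, 0.6])
labels = np.array([1.0, 1.0])
# A 40-px pedestrian gets weight 2.0; a 160-px one keeps weight 1.0,
# so errors on the small pedestrian dominate the loss.
loss = scale_weighted_bce(probs, labels, [40.0, 160.0])
```

With all boxes at or above `ref_height`, the function reduces to plain cross-entropy, so the reweighting only changes the relative emphasis during training.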
An integrated damage modeling and assessment framework for overhead power distribution systems considering tree-failure risks
Published in Structure and Infrastructure Engineering, 2022
Computer vision is a technology concerned with the automatic extraction of useful information from image data to perform tasks of the human visual cortex (Aloimonos & Rosenfeld, 1991). One of its major functions is to detect instances of semantic objects of a certain category in digital images and videos (Dasiopoulou, Mezaris, Kompatsiaris, Papastathis, & Strintzis, 2005). Central research and applications in object detection techniques include pedestrian detection and tracking (Melotti, Premebida, Goncalves, Nunes, & Faria, 2018), vehicle tracking and classification (Arabi, Haghighat, & Sharma, 2020), human detection (Maji, Berg, & Maliks, 2008), and human face detection and recognition (Singh & Goel, 2020). Recently, object detection has been applied to recognise trees from satellite images for urban tree location detection (Shen, Yan, Wu, & Su, 2020), forest management (Tao et al., 2020), and hurricane-related roadway disruption analysis (Kakareko, Jung, & Ozguven, 2020; Kocatepe et al., 2019). Its successful application to the detection of various objects shows that computer vision object detection is a powerful tool for providing the geographical information of trees surrounding an OPDS in the fragility evaluation.
Optimizing Sector Ring Histogram of Oriented Gradients for human injured detection from drone images
Published in Geomatics, Natural Hazards and Risk, 2021
Marzieh Ghasemi, Masood Varshosaz, Saied Pirasteh, Ghazal Shamsipour
In the experiments carried out, the authors unfortunately could not obtain images taken after a real disaster. However, different image sets that closely resemble real search-and-rescue conditions were used. The first dataset used is the INRIA dataset, which has been commonly used along with the HOG feature as a comprehensive benchmark for testing human and pedestrian detection algorithms in several studies (Ojala et al. 2002; Dalal and Triggs 2005; Benenson et al. 2014). The dataset includes ground images of humans in standing, walking, and upright views only. In addition, since a drone views a person mostly from a top-down or oblique angle, many additional images taken by several drones at different angles were acquired and used. These include images taken by a DJI Tello from an oblique angle at low altitude, and images taken by a Phantom 4 Pro, an Inspire 2, and a Parrot ArDrone 2.0, all at nadir from different heights. Among these, in one of the sets, students were spread over a relatively large area. They took different poses, including standing, lying down, and sitting, to closely resemble UAV images from a real search and rescue operation. A Phantom 4 Pro took these images at 60 m height. Details of these image sets are given in the following lines.
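The HOG feature mentioned above can be illustrated by the orientation histogram of a single cell: gradient magnitudes vote into bins by their edge orientation. This is a simplified sketch of the descriptor of Dalal and Triggs, omitting vote interpolation, cell tiling, and block normalisation.

```python
import numpy as np

def cell_orientation_histogram(cell, n_bins=9):
    """Gradient-orientation histogram for one HOG cell (simplified).

    cell: 2-D float array of pixel intensities.
    n_bins: number of unsigned orientation bins over [0, 180) degrees.
    Each pixel's gradient magnitude votes into the bin of its
    orientation; full HOG additionally interpolates votes between
    bins and normalises over blocks of cells.
    """
    gy, gx = np.gradient(cell.astype(float))
    magnitude = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees.
    orientation = np.degrees(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((orientation / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), magnitude.ravel())
    return hist

# A vertical edge (intensity step along x) yields horizontal gradients,
# so votes concentrate in the 0-degree bin.
cell = np.zeros((8, 8))
cell[:, 4:] = 1.0
hist = cell_orientation_histogram(cell)
```

Concatenating such histograms over a grid of cells, with block-wise normalisation, gives the HOG descriptor used with the INRIA benchmark.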