Real-time conversion of speech to sign language and hand gesture recognition
Published in Sangeeta Jadhav, Rahul Desai, Ashwini Sapkal, Application of Communication Computational Intelligence and Learning, 2022
Sangeeta Jadhav, Sumit Kumar, Harsh Chauhan, Supriya Negi, Vishvajeet Singh
Background subtraction is a widely used approach to detect moving objects, in other words, to separate the foreground elements from a static background. The technique generates a foreground mask: a binary image whose pixels belong to the moving objects in the scene. Its applications include moving object detection, object tracking, traffic monitoring, and image segmentation. The mask is computed by subtracting a background model from the current frame. We first compute an accumulated weighted average of the frames to build the background model, then subtract this average from each frame to find any object covering the background. Finally, each difference frame is thresholded and its contours are extracted.
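The pipeline above — accumulated weighted average, subtraction, thresholding — can be sketched on toy one-dimensional frames. This is a minimal pure-Python illustration, not the chapter's implementation (which would typically use OpenCV routines such as cv2.accumulateWeighted, cv2.absdiff, cv2.threshold and cv2.findContours); the function names, threshold, and pixel values below are assumptions for illustration, and the contour-extraction step is omitted:

```python
def update_background(bg, frame, alpha=0.1):
    """Accumulated weighted average: bg <- (1 - alpha) * bg + alpha * frame."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=25):
    """Threshold the absolute difference to obtain a binary foreground mask."""
    return [1 if abs(f - b) > thresh else 0 for f, b in zip(frame, bg)]

# Toy 1-D "frames": a bright object (value 200) moves over a dark background (~50).
frames = [
    [50, 52, 49, 51, 50, 48],    # background only
    [50, 51, 50, 50, 49, 50],    # background only
    [50, 200, 200, 50, 49, 50],  # object covers pixels 1-2
]

bg = frames[0][:]                # initialise the model with the first frame
for frame in frames[:2]:         # accumulate the background over empty frames
    bg = update_background(bg, frame)

mask = foreground_mask(bg, frames[2])
print(mask)  # -> [0, 1, 1, 0, 0, 0]: pixels 1 and 2 are flagged as foreground
```

A real system would keep updating the background on every frame so the model adapts to gradual illumination changes, while a large `alpha` would let moving objects bleed into the model.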
Detection of Moving Human in Vision-Based Smart Surveillance under Cluttered Background: An Application of Internet of Things
Published in Lavanya Sharma, Pradeep K Garg, From Visual Surveillance to Internet of Things, 2019
In a video surveillance system, the camera is typically static while the background is dynamic, which frames the problem of detecting moving objects in a video [5–9]. Moving object detection can be performed with several techniques, such as background subtraction, region-based image segmentation, temporal differencing, active contour models, and optical flow. Background subtraction is a popular approach suited to a static camera with a dynamic background: the moving object is detected by subtracting the current frame from a background reference frame and comparing the two [10–13]. Because the camera is fixed but the background may change [8, 14–17], the background model must adapt to dynamic backgrounds.
Search, Tracking, and Surveillance
Published in Yasmina Bestaoui Sebbane, Intelligent Autonomy of Uavs, 2018
The challenges of moving object detection on board UAVs include camera motion, objects that appear as only a few pixels in the image, changing object background, object aggregation, and noise. In general, two modules are needed for digital video stabilization: a global motion estimation module and a motion compensation module, and accurate motion correction requires accurate global motion estimation. Many methods aim to estimate global motion accurately. One approach calculates the motions of four sub-images located at the corners of the image; another, based on circular block matching, estimates local motion and then obtains the global motion parameters by repeated least squares. Global motion can also be estimated by extracting and tracking corner features. When a UAV flies at hundreds of meters it covers a large surveillance region, but the images are noisy and blurred. Moving sensors in a UAV surveillance system capture video at low resolution and low frame rate, which challenges motion detection, foreground segmentation, tracking, and related algorithms; scale and view variations and the few pixels on the region of interest add to these challenges, so pre-processing is necessary for good detection. Characteristics of the video itself, such as unmodeled gain control, rolling shutter, compression, pixel noise, and contrast adjustment, violate brightness constancy and geometric models. Moreover, the background on a UAV surveillance platform changes frequently because of the high speed of the aircraft.
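The corner-sub-image idea for global motion estimation can be sketched as follows: match a small block from each corner region of the previous frame inside the current frame, then combine the per-block shifts into one global estimate. This is a hypothetical pure-Python sketch assuming a pure-translation model (the repeated-least-squares fit over full motion parameters mentioned above is simplified to averaging the four translations); all names and values are assumptions:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-size blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def block(img, top, left, size):
    """Extract a size x size sub-image with its top-left corner at (top, left)."""
    return [row[left:left + size] for row in img[top:top + size]]

def estimate_shift(prev, curr, top, left, size, search=1):
    """Exhaustively find the (dy, dx) that best aligns a block of prev inside curr."""
    ref, best = block(prev, top, left, size), None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cost = sad(ref, block(curr, top + dy, left + dx, size))
            if best is None or cost < best[0]:
                best = (cost, dy, dx)
    return best[1], best[2]

# Synthetic textured 8x8 frame, then a copy shifted down-right by one pixel
# (wrapping at the borders) to simulate global camera motion.
prev = [[(3 * y + 5 * x) % 17 for x in range(8)] for y in range(8)]
curr = [[prev[(y - 1) % 8][(x - 1) % 8] for x in range(8)] for y in range(8)]

# Match one block near each corner and average the four translation estimates.
shifts = [estimate_shift(prev, curr, t, l, 3) for t, l in [(1, 1), (1, 4), (4, 1), (4, 4)]]
dy = sum(s[0] for s in shifts) / len(shifts)
dx = sum(s[1] for s in shifts) / len(shifts)
print((dy, dx))  # -> (1.0, 1.0): the recovered global motion
```

Motion compensation would then warp the current frame by (-dy, -dx) before any foreground segmentation step.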
A method of perspective normalization for video images based on map data
Published in Annals of GIS, 2020
Bingxian Lin, Changlu Xu, Xin Lan, Liangchen Zhou
The validity of the perspective correction must be verified on specific video analysis methods. The nonlinear perspective weights obtained here and the linearized perspective weights from the existing literature are both applied in the post-processing stage of moving target detection, and the results are compared and verified. Moving object detection is a method that distinguishes moving objects from background information in video sequence images; it is the basis of various video analysis and video compression algorithms (Kim 2003; Kim and Hang 2003). In video surveillance applications, the background usually does not change within a certain period; thus, moving target detection is mostly performed by background subtraction. A variety of background subtraction algorithms have been incorporated into the open-source BGS Library, and we select five widely used ones to separate the foreground and background of the video images. For the extracted foreground image, the two perspective weights are used to denoise, and the actual moving pedestrians are retained by using digital morphological methods. The process is shown in Figure 7.
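The digital morphological step used here to clean the extracted foreground can be illustrated with a morphological opening (erosion followed by dilation), which removes isolated noise pixels while preserving compact foreground blobs. Below is a pure-Python sketch with a 3×3 structuring element; in practice one would call cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel), and the function names and toy mask are assumptions:

```python
def erode(mask, h, w):
    """3x3 erosion: a pixel survives only if its whole neighbourhood is foreground."""
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if all(mask[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out

def dilate(mask, h, w):
    """3x3 dilation: a pixel is set if any neighbour (or itself) is foreground."""
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if any(0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1)):
                out[y][x] = 1
    return out

def opening(mask):
    """Opening = erosion then dilation: kills speckle, keeps solid blobs."""
    h, w = len(mask), len(mask[0])
    return dilate(erode(mask, h, w), h, w)

# A 3x3 pedestrian blob plus three isolated noise pixels.
noisy = [
    [0, 0, 0, 0, 0, 0, 1],
    [0, 1, 1, 1, 0, 0, 0],
    [0, 1, 1, 1, 0, 0, 0],
    [0, 1, 1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0],
]
clean = opening(noisy)
print(sum(map(sum, clean)))  # -> 9: the 3x3 blob survives, all noise pixels are gone
```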
A traffic monitoring stream-based real-time vehicular offence detection approach
Published in Journal of Intelligent Transportation Systems, 2018
Automatic moving object detection, which segments moving objects from video streams, is a key technology in intelligent transportation systems (Huang & Chen, 2013; Oreifej, Li, & Shah, 2013). It is the preprocessing step for license plate detection from the traffic monitoring video stream (Cheng, Chen, & Huang, 2015). Moving object detection methods can be classified into three groups (Huang & Chen, 2014): temporal differencing, optical flow, and background subtraction. Background subtraction methods detect motion by comparing pixel features that distinguish the incoming image from a reference background model built from previous images (Chen & Huang, 2015; Chen & Huang, 2014; Cheng, Chen, & Huang, 2015; Guo et al., 2013; Huang & Chen, 2014; Huang & Chen, 2013). Huang et al. (Chen & Huang, 2015; Chen & Huang, 2014; Huang & Chen, 2014; Huang & Chen, 2013) proposed methods for moving object detection from traffic monitoring video streams, and their experimental results demonstrated that these methods meet the requirements of real-time moving object detection. They address the bandwidth limitation with a rate control scheme that adapts the bit-rate to the available network bandwidth. Chen and Huang (2014) proposed a radial basis function network as the principal component of their approach.
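Of the three groups listed above, temporal differencing is the simplest: threshold the pixel-wise difference between consecutive frames. A hypothetical sketch on toy 1-D frames (names and values are assumptions, not from the article); the second mask also shows the method's characteristic artifact, where both the old and the new positions of a moving object are flagged:

```python
def temporal_difference(prev, curr, thresh=20):
    """Flag pixels whose intensity changed between two consecutive frames."""
    return [1 if abs(c - p) > thresh else 0 for p, c in zip(prev, curr)]

frames = [
    [60, 60, 60, 60, 60],
    [60, 60, 180, 60, 60],   # object enters at pixel 2
    [60, 60, 60, 180, 60],   # object moves to pixel 3
]
masks = [temporal_difference(frames[i], frames[i + 1]) for i in range(2)]
print(masks)  # -> [[0, 0, 1, 0, 0], [0, 0, 1, 1, 0]]
```

The double response in the second mask (pixels 2 and 3) is why temporal differencing is often combined with a background model in traffic monitoring pipelines.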
Real-time video object detection and classification using hybrid texture feature extraction
Published in International Journal of Computers and Applications, 2021
N. Venkatesvara Rao, D. Venkatavara Prasad, M. Sugumaran
Moving object detection is an essential feature of surveillance applications such as video analysis, video communication, traffic control, medical imaging, and military services. Generally, video frames contain both foreground and background information: the feature points of current interest constitute the foreground, while the enduring feature points are treated as the background.