Object Detection and Tracking in Video Using Deep Learning Techniques: A Review
Published in K. Gayathri Devi, Mamata Rath, Nguyen Thi Dieu Linh, Artificial Intelligence Trends for Data Analytics Using Machine Learning and Deep Learning Approaches, 2020
Dhivya Praba Ramasamy, Kavitha Kanagaraj
Moving objects (single or many) viewed by a camera over a period of time can be followed using video tracking. One major issue is object clutter: often the object region of interest resembles its background, or the object may be hidden by another object present in the scene. Tracking an object in a scene can be problematic for the following reasons (Figure 3.1 [11]; see the sketch after this list):
- Object position: the position of the intended object may vary from one video frame to another, and so may its projection onto the video frame plane.
- Ambient illumination: the ambient light falling on the object of interest can change in intensity, direction, and color across the video.
- Noise: the video may contain some amount of noise introduced during signal acquisition, depending on the quality of the sensor.
- Occlusions: a moving object may be hidden by another object in the same scene, so it cannot be tracked even though it is present.
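As a minimal illustration (not from the chapter) of how clutter, noise, and illumination changes complicate detection, the sketch below runs OpenCV's MOG2 background subtractor on a video and draws boxes around moving regions; the input path "input.mp4", the area threshold, and the subtractor parameters are placeholder assumptions.

```python
# Minimal sketch: background subtraction on a video, where clutter, sensor noise,
# and illumination changes show up as spurious foreground blobs.
# "input.mp4" is a placeholder path; parameter values are illustrative only.
import cv2

cap = cv2.VideoCapture("input.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)       # foreground mask; illumination changes
                                         # and noise appear as spurious regions
    mask = cv2.medianBlur(mask, 5)       # simple noise suppression
    # Candidate moving-object regions; occluded objects may simply vanish here.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x return form
    for c in contours:
        if cv2.contourArea(c) > 500:     # ignore tiny noise blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(30) == 27:            # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```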
Video Object Tracking
Published in Maheshkumar H. Kolekar, Intelligent Video Surveillance Systems, 2018
Video object tracking is the process of estimating the position of a moving object over time using a camera. Tracking creates the trajectory of an object in the image plane as it moves through the scene. The tracker assigns a consistent label to each moving object across the frames of a video, and it can also provide additional information about moving objects such as orientation, area, and shape. However, tracking is a challenging task because of the loss of information caused by projecting the 3-D world onto a 2-D image, as well as occlusion, noise, illumination changes, and complex object motion. Moreover, many applications impose real-time processing requirements, which adds to the challenge. Video tracking is used in human activity recognition, human-computer interaction, video surveillance, video communication, video compression, traffic control, and medical imaging. Given the growing demand for automated video analysis, object tracking remains an active area of research [22].
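To make the labeling step above concrete, here is a minimal sketch (not from the book) of a centroid-based tracker that keeps a consistent ID for each moving object by matching detected centroids between consecutive frames; the detections themselves are assumed to come from some per-frame detector, and the distance threshold is an illustrative value.

```python
# Minimal sketch: assign consistent labels to moving objects across frames by
# greedily matching detected centroids to the nearest centroid from the last frame.
import math


class CentroidTracker:
    def __init__(self, max_distance=50.0):
        self.next_id = 0
        self.objects = {}                    # id -> (x, y) centroid from last frame
        self.max_distance = max_distance     # illustrative matching threshold

    def update(self, centroids):
        """Match new centroids to existing IDs by nearest distance."""
        assigned = {}
        unused_ids = set(self.objects)
        for cx, cy in centroids:
            best_id, best_dist = None, self.max_distance
            for oid in unused_ids:
                ox, oy = self.objects[oid]
                d = math.hypot(cx - ox, cy - oy)
                if d < best_dist:
                    best_id, best_dist = oid, d
            if best_id is None:              # no close match: new object, new ID
                best_id = self.next_id
                self.next_id += 1
            else:
                unused_ids.discard(best_id)
            assigned[best_id] = (cx, cy)
        self.objects = assigned              # IDs with no match are dropped
        return assigned


# Example: the same physical object keeps ID 0 across two frames.
tracker = CentroidTracker()
print(tracker.update([(100, 100), (300, 200)]))   # {0: (100, 100), 1: (300, 200)}
print(tracker.update([(105, 102), (310, 205)]))   # {0: (105, 102), 1: (310, 205)}
```

This greedy nearest-neighbour association is only a toy version of the data-association step; real trackers additionally use motion models and appearance cues to survive occlusion and clutter.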
A Comparative Study of Illumination Invariant Techniques in Video Tracking Perspective
Published in IETE Technical Review, 2020
C. S. Asha, A. V. Narasimhadhan
The task of video tracking is to determine the target’s location in every frame, given an object specified by the user in the first frame. In the literature, two main types of trackers have been proposed: generative and discriminative. Generative techniques construct an object model describing the visual appearance and scan for a similar region in successive frames. These techniques generally include pixel-intensity matching, histogram matching, and matching based on subspace or dictionary representations. Some generative techniques are intrinsically illumination invariant, such as the normalized cross-correlation (NCC) tracker [2], which performs pixel matching on normalized pixel values. However, pixel values vary under illumination changes, and direct pixel matching or histogram matching techniques often lose the target very easily. This category includes Mean Shift tracking (MS) [3], which uses histogram matching; the Corrected Background Weighted Histogram technique (CBWH) [4], which matches a background-weighted histogram; incremental video tracking [5], which matches through subspace techniques; the L1 tracker [6], which matches based on sparse representation; the Distribution Field Tracker (DFT) [7], which matches distribution fields; particle filters [8], which are based on sampling techniques; and MFT [1], which uses the optical flow method.
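As a rough illustration of the generative, NCC-style matching described above (a sketch under stated assumptions, not the cited tracker [2]), the snippet below takes a user-specified target patch from the first frame and relocates it in each new frame with OpenCV's zero-mean normalized cross-correlation, which is insensitive to uniform brightness scaling; the video path and the initial box (x, y, w, h) are placeholders.

```python
# Minimal sketch of NCC-style template matching for tracking.
# "input.mp4" and the initial box are placeholder values chosen for illustration.
import cv2

cap = cv2.VideoCapture("input.mp4")
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 80                        # user-specified target in frame 1
template = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Normalized cross-correlation score for every candidate position.
    scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    x, y = max_loc                                   # best-matching location this frame
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow("NCC tracking", frame)
    if cv2.waitKey(30) == 27:                        # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Because the template is never updated, this sketch also shows why simple generative matching drifts or fails under appearance change and occlusion, which is the weakness the review goes on to discuss.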