RFID-Enabled Privacy-Preserving Video Surveillance: A Case Study
Published in Syed Ahson, Mohammad Ilyas, RFID Handbook, 2017
Jehan Wickramasuriya, Sharad Mehrotra, Nalini Venkatasubramanian
Even though we have realized a fully functional implementation of our framework, the deployment of such a system should eventually be pushed to the camera level. In this implementation, we tightly coupled the processing capabilities (of the PC) to the camera and processed everything in real time, with no archival of the original video data (only the processed video stream is available). Ideally, this processing capability should reside at the camera level to make privacy preservation in media spaces more acceptable to end users. Optimization of the algorithms used here for object tracking and privacy masking is a key component of such a realization, as is the possible use of MPEG-4 video. MPEG-4 offers superior compression efficiency, advanced error control, object-based functionality, and fine-grain scalability, making it highly suitable for streaming video applications. Recently, MPEG-4 has emerged as a potential front runner for surveillance applications because of its layered representation, which is a natural fit for surveillance tasks as they are inherently object-based. It is also desirable to find people moving in the scene independent of the background. Further, more fine-grained localization via RFID and other low-cost sensors would also improve the accuracy of the system. One feature of these integrated applications is that the overall system is only as strong as each of its subcomponents; thus, with the infrastructure in place, the subcomponents can be improved somewhat independently.
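The chapter does not include code, but a rough sketch of the tracking-and-masking step described above might look as follows in Python with OpenCV: moving foreground regions are detected and blurred before the frame leaves the processing node, so the original video is never archived. The capture source, area threshold, and blur parameters are illustrative assumptions, and the example uses generic background subtraction rather than the authors' RFID-assisted tracker.

# Illustrative sketch only (assumes OpenCV 4): blur moving foreground regions
# before the frame is streamed; the original frames are never stored.
import cv2

cap = cv2.VideoCapture(0)                      # hypothetical camera source
bg = cv2.createBackgroundSubtractorMOG2()      # generic foreground/motion model

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:           # skip small noise blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    cv2.imshow("privacy-preserved stream", frame)   # stand-in for the output stream
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()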
Business—Technology Interface
Published in Klaus Diepold, Sebastian Moeritz, Understanding MPEG-4, 2012
Klaus Diepold, Sebastian Moeritz
Bearing in mind that broadcasters need to serve their content on multiple delivery platforms such as TV, Internet, broadband, DVD, wireless, VHS, and so on, they need solutions that provide the utmost flexibility, scalability, functionality, speed, and reliability. Many of the systems currently in place are unlikely to manage this additional workload in the most efficient way possible. This is exactly where MPEG-4 comes in. While MPEG-2 is currently regarded as the de facto video coding standard in digital broadcasting, MPEG-4 offers improved coding efficiency, resulting in higher quality of coded video and audio, particularly at low bit rates. With utilization spread across low, intermediate, and high bit rates, MPEG-4 offers a significant advantage over other video standards, enabling encoded data to be accessible over a wide range of media in various qualities. MPEG-4 is an extremely interesting format because it covers many types of applications and wide ranges of resolutions, qualities, bit rates, and services. In addition, taking the latest developments into consideration, MPEG-4 will eventually succeed MPEG-2 as the dominant broadcast format.
Digital Rights Management Issues for Video
Published in Borko Furht, Darko Kirovski, Multimedia Encryption and Authentication Techniques and Applications, 2006
Sabu Emmanuel, Mohan S. Kankanhalli
Digital video is usually compressed. The open compression standards from the ISO/IEC Moving Picture Experts Group (MPEG) are MPEG-1, MPEG-2, and MPEG-4 [7]. The MPEG-1 standard is used for the video compact disc (VCD). DVD, digital video broadcasts (digital TV transmissions), and high-definition television (HDTV) transmissions currently use the MPEG-2 standard. MPEG-4 is a newer standard intended for both low-bit-rate (wireless and mobile video) and high-bit-rate applications. There are other open standards, such as H.261 and H.263 from the ITU, intended primarily for videoconferencing over telephone lines. Apart from the open standards, there are proprietary compression formats from RealNetworks, Apple, Microsoft, and others. In addition to compression, digital assets must be wrapped in metadata that declares the structure and composition of the asset, describes and identifies the contents, and expresses the digital rights. We discuss the declarations, descriptions, and identifiers in section “Content Declarations, Descriptors, and Identifiers.”
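As a loose illustration of the metadata wrapping described above, the sketch below (Python) bundles a compressed asset with its declaration, description, identifier, and rights expression; the field names and structure are hypothetical and are not drawn from MPEG-21 or any specific DRM scheme.

# Hypothetical metadata wrapper for a compressed video asset; field names and
# structure are illustrative, not taken from any particular standard.
from dataclasses import dataclass

@dataclass
class AssetMetadata:
    declaration: dict   # structure/composition of the asset (container, codecs, tracks)
    description: dict   # what the content is about
    identifier: str     # unique identifier for the content
    rights: list        # rights expressions: who may do what, under which terms

asset = AssetMetadata(
    declaration={"container": "mp4", "video_codec": "mpeg-4", "audio_codec": "aac"},
    description={"title": "Sample clip", "genre": "documentary"},
    identifier="urn:example:asset:0001",
    rights=[{"grantee": "subscriber", "permission": "play", "expires": "2026-01-01"}],
)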
Flexible FPGA 1D DCT hardware architecture for HEVC
Published in Automatika, 2023
Hrvoje Mlinarić, Alen Duspara, Daniel Hofman, Josip Knezović
The standardization organizations ITU-T Video Coding Experts Group (VCEG) and ISO/IEC Moving Picture Experts Group (MPEG) jointly created the High Efficiency Video Coding (HEVC) standard [1,2]. These two organizations have developed and enhanced video coding standards over time: ITU-T developed H.261 [3] and H.263 [4], whereas ISO/IEC developed MPEG-1 [5] and MPEG-4 Visual [6]. Moreover, the two organizations worked together to develop the H.262/MPEG-2 Video [7] and H.264/MPEG-4 Advanced Video Coding (AVC) [8] standards. Prior to the HEVC initiative, the most recent video coding standard was H.264/MPEG-4 AVC, which had been extended significantly since its initial release. H.264/MPEG-4 AVC has been instrumental in enabling digital video in numerous areas that H.262/MPEG-2 did not previously reach. HEVC was created to address all existing H.264/MPEG-4 AVC applications, with a particular focus on two issues: increased video resolution and increased use of parallel processing architectures.
An FPGA-friendly CABAC-encoding architecture with dataflow modelling programming
Published in The Imaging Science Journal, 2018
Dandan Ding, Fuchang Liu, Honggang Qi, Zhengwei Yao
To date, numerous video compression standards have been published by ITU-T VCEG [1] and ISO/IEC MPEG [2]. Among these standards, H.263 [3], MPEG-2 [4], H.264/MPEG-4 AVC [5], and H.265/HEVC [6] are well known and widely used. Nowadays, multimedia terminals such as smartphones, PDAs, and TV set-top boxes are usually required to support multiple standards. The most popular video standards are all built on the block-based hybrid coding framework and share several basic coding modules: intra/inter prediction, transform, quantization, and entropy coding [7]. To exploit these similarities and common components among the standards, MPEG established the Reconfigurable Video Coding (RVC) framework [8]. The key idea behind it is to decompose the fundamental algorithms into basic functional components so that an encoder or decoder can be configured flexibly. Under the RVC framework, a dataflow modelling scheme is employed to compose the basic components into encoder or decoder configurations. The dataflow modelling principle is that individual components encapsulate their own state and thus do not share memory with one another [9]. In addition, the components are connected by virtual channels and communicate with one another via tokens passed through those channels.
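The dataflow principle described here (components that keep their own state and exchange tokens only through channels) can be sketched roughly as follows in Python; the actor names, token values, and queue-based channels are illustrative and are not taken from the actual RVC tool chain.

# Minimal sketch of the dataflow idea: each actor encapsulates its own state and
# communicates only via token-carrying channels (queues). Actor names and token
# contents are illustrative, not part of the RVC framework itself.
from queue import Queue
from threading import Thread

def quantizer(inp: Queue, out: Queue, step: int = 4):
    # Consumes transform-coefficient tokens, emits quantized-level tokens.
    while True:
        coeff = inp.get()
        if coeff is None:          # end-of-stream token
            out.put(None)
            break
        out.put(coeff // step)

def entropy_coder(inp: Queue):
    # Consumes quantized-level tokens; printing stands in for a real
    # entropy-coding stage such as CABAC.
    while True:
        level = inp.get()
        if level is None:
            break
        print("coded level:", level)

ch1, ch2 = Queue(), Queue()        # virtual channels connecting the actors
Thread(target=quantizer, args=(ch1, ch2)).start()
Thread(target=entropy_coder, args=(ch2,)).start()

for c in [16, -8, 3, 0]:           # toy coefficient tokens
    ch1.put(c)
ch1.put(None)                      # signal end of stream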