Video Compression
Published in Jerry D. Gibson, Mobile Communications Handbook, 2017
Do-Kyoung Kwon, Madhukar Budagavi, Vivienne Sze, Woo-Shik Kim
Two context-adaptive entropy coding methods are supported: context-adaptive variable length coding (CAVLC) and context-adaptive binary arithmetic coding (CABAC) [15]. CAVLC is applied only to the coding of transform coefficients, while normal VLC is used for the other syntax elements. CAVLC encodes transform coefficients using VLC tables that adapt based on the number of nonzero coefficients in neighboring blocks. CABAC is applied to the coding of all MB-level syntax elements, including the MB header and transform coefficients. CABAC combines binary arithmetic coding with context modeling and consists of three operations: binarization, context modeling, and binary arithmetic coding. A syntax element is first converted to a string of binary symbols (a bin string), and each bin is arithmetic coded with a continually updated context model. Context modeling is the key to the high coding efficiency of CABAC: based on the statistics of previously coded syntax elements, the probability model of the binary symbols is updated and used to encode them. CABAC improves coding efficiency by around 9–14% over CAVLC at the cost of increased complexity.
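To make the three CABAC stages concrete, here is a minimal Python sketch, not the standardized CABAC engine (which uses a finite-state probability table and a renormalizing arithmetic coder): it binarizes small integers with a unary code, adapts a per-context probability estimate after every bin, and reports the ideal arithmetic-coding cost of -log2(p) bits per bin. The function names and the counting-based model are illustrative assumptions.

```python
from math import log2

def unary_binarize(value):
    """Binarize a non-negative integer as a unary bin string:
    'value' ones followed by a terminating zero (e.g. 3 -> [1, 1, 1, 0])."""
    return [1] * value + [0]

class ContextModel:
    """Adaptive probability estimate for one context.
    Counts observed bins and updates P(bin) after each symbol, mimicking how
    CABAC adapts its context models (the standard uses a finite-state table,
    not explicit counts)."""
    def __init__(self):
        self.count = [1, 1]  # Laplace-smoothed counts of 0s and 1s

    def prob(self, bin_value):
        return self.count[bin_value] / (self.count[0] + self.count[1])

    def update(self, bin_value):
        self.count[bin_value] += 1

def encode_symbols(values):
    """Return the ideal number of bits an arithmetic coder driven by the
    adaptive context model would spend: -log2(p) bits per bin."""
    ctx = ContextModel()
    total_bits = 0.0
    for v in values:
        for b in unary_binarize(v):
            total_bits += -log2(ctx.prob(b))  # arithmetic-coding cost of this bin
            ctx.update(b)                     # adapt the context model
    return total_bits

if __name__ == "__main__":
    # Skewed data: the adaptive model learns the bias, so the estimated
    # cost drops well below one bit per bin.
    data = [0, 0, 1, 0, 0, 0, 2, 0, 0, 0]
    bins = sum(len(unary_binarize(v)) for v in data)
    print(f"{bins} bins, ~{encode_symbols(data):.1f} bits")
```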
Data Hiding in Compressed Images and Videos
Published in S. Ramakrishnan, Cryptographic and Information Security, 2018
Figure 14.11 contains many components. Among them, intra-frame prediction, motion estimation, transform/quantization, the de-blocking filter, and entropy coding are the most important. Intra-frame prediction resembles the prediction used in the lossless mode of JPEG; the distinction is that the number of prediction modes is much larger than in JPEG. Different prediction modes affect only the accuracy of the predicted value, so choosing a different mode only slightly increases the size of the compressed bit stream. Thus data hiding schemes prefer to hide bits by changing the optimal prediction mode to a sub-optimal one. The second component is motion estimation, which finds the best match for the current block in a previously coded frame; the horizontal and vertical displacement is called the motion vector. Like the intra-prediction mode, the motion vector is another element that is often used to hide bits. The third component is transform and quantization, which is similar to the corresponding stage in JPEG and is therefore also often used to hide bits. The last component is entropy coding; the two most common kinds are CAVLC (context-based adaptive variable length coding) and CABAC (context-based adaptive binary arithmetic coding). Note that both CAVLC and CABAC are supported in H.264, but CAVLC is removed from the latest standard, H.265/HEVC (High Efficiency Video Coding). In data hiding, researchers often hide bits by forming equivalent partitions of the entropy coding tables in compressed video and audio. In addition, video coding has a common organizational structure, shown in Figure 14.12.
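As a simple illustration of motion-vector-based hiding (one common family of schemes; the parity rule below is a hypothetical choice for this sketch, not a method from this chapter), a hidden bit can be embedded in the least significant bit of one motion-vector component and recovered at the decoder:

```python
def embed_bit_in_mv(mv, bit):
    """Hypothetical sketch: embed one hidden bit in the parity (LSB) of the
    horizontal motion-vector component. mv is an (mvx, mvy) pair; practical
    schemes usually restrict embedding to MVs whose modification causes
    little extra distortion or rate."""
    mvx, mvy = mv
    if (mvx & 1) != bit:          # adjust parity only when it disagrees
        mvx += 1 if bit else -1   # move to a neighboring value with the right parity
    return (mvx, mvy)

def extract_bit_from_mv(mv):
    """Recover the hidden bit from the parity of the horizontal component."""
    return mv[0] & 1

if __name__ == "__main__":
    stego_mv = embed_bit_in_mv((6, -3), 1)   # (7, -3): mvx parity now carries the bit
    print(stego_mv, extract_bit_from_mv(stego_mv))
```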
AVC
Published in Yun Q. Shi, Huifang Sun, Image and Video Compression for Multimedia Engineering, 2017
Further, since CAVLC is simpler than CABAC, it serves as the baseline entropy coding method for H.264/AVC. In the CAVLC scheme, inter-symbol redundancies are exploited by switching VLC tables for different syntax elements depending on the history of transmitted coding symbols. The basic coding tool in CAVLC is the Exp-Golomb code (Exponential Golomb code). Exp-Golomb codes are VLCs that consist of a prefix part (1, 01, 001, . . .) and a suffix part that is a set of bits (x0, x1x0, x2x1x0, . . .), where xi is a binary bit. The code word structure is represented in Tables 20.1 through 20.3.
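A short sketch of the zero-order Exp-Golomb code with this prefix/suffix structure is given below; the function names are illustrative, and a real H.264 encoder additionally maps signed or mapped syntax elements onto the unsigned code number before applying the code.

```python
def exp_golomb_encode(v):
    """Zero-order Exp-Golomb code for an unsigned integer v: a prefix of
    M zeros and a 1, followed by an M-bit suffix, where M = floor(log2(v + 1))
    and the suffix value is v + 1 - 2**M."""
    value = v + 1
    m = value.bit_length() - 1                            # number of suffix bits
    prefix = "0" * m + "1"
    suffix = format(value - (1 << m), f"0{m}b") if m else ""
    return prefix + suffix

def exp_golomb_decode(bits):
    """Decode one code word from the front of a bit string; returns (v, rest)."""
    m = bits.index("1")                                   # count leading zeros
    value = (1 << m) + int(bits[m + 1:m + 1 + m] or "0", 2)
    return value - 1, bits[m + 1 + m:]

if __name__ == "__main__":
    for v in range(6):
        print(v, exp_golomb_encode(v))   # 0 -> '1', 1 -> '010', 2 -> '011', ...
```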
A New Low Complexity Bit-truncation Based Motion Estimation and Its Efficient VLSI Architecture
Published in IETE Journal of Research, 2021
Sravan K. Vittapu, Souvik Kundu, Sumit K. Chatterjee
The main objective of video coding is to reduce the amount of data in a video sequence without degrading its visual quality. Since video encoding was first developed, Motion Estimation (ME) techniques have been applied to reduce temporal redundancies in video sequences, while entropy coding techniques such as context-adaptive variable-length coding and context-adaptive binary arithmetic coding have been used to remove statistical redundancies. It should, however, be noted that the ME stage is generally the most intricate as well as the most power-consuming part of the video encoder [3]. To reduce the overall computational complexity of the video encoder, many ingenious ME methods have been suggested in the literature; however, hardware implementations of these ME methods are few and far between.
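For reference, a minimal full-search block-matching sketch follows; it is not the bit-truncation architecture proposed in the paper, only a plain SAD-based search (function names and parameters are illustrative), with a comment noting where bit-truncation would reduce the pixel word length.

```python
import numpy as np

def full_search_me(cur_block, ref_frame, top, left, search_range=8):
    """Exhaustively test every candidate displacement within +/- search_range
    and return the motion vector with the smallest sum of absolute differences
    (SAD). Bit-truncation methods would first drop low-order bits of the pixels
    (e.g. block >> k) so the SAD can be computed with narrower adders."""
    n = cur_block.shape[0]
    h, w = ref_frame.shape
    best_mv, best_sad = (0, 0), None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > h or x + n > w:
                continue  # candidate block falls outside the reference frame
            cand = ref_frame[y:y + n, x:x + n]
            sad = int(np.abs(cur_block.astype(int) - cand.astype(int)).sum())
            if best_sad is None or sad < best_sad:
                best_mv, best_sad = (dx, dy), sad
    return best_mv, best_sad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    cur = ref[18:34, 21:37].copy()                      # shifted copy of a block
    print(full_search_me(cur, ref, top=16, left=16))    # expects mv (5, 2), SAD 0
```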