Video coding
Published in Phillip A. Laplante, Dictionary of Computer Science, Engineering, and Technology, 2017
video coding: compression of moving images. Coding can be done purely on an intraframe (within-frame) basis, using a still image coding algorithm, or by exploiting temporal correlations between frames (interframe coding). In the latter case, the encoder estimates the motion between the current frame and a previously coded reference frame, encodes a field of motion vectors that describes the motion compactly, generates a motion-compensated prediction image, and codes the difference between this prediction and the actual frame with an intraframe residue coder, typically based on the 8 × 8 discrete cosine transform. The decoder receives the motion vectors and the encoded residue, constructs the prediction picture from its stored reference frame, and adds back the difference information to recover the frame. See also MPEG.
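To make this encoder/decoder loop concrete, here is a minimal sketch of motion-compensated prediction for a single 8 × 8 block, written in Python with NumPy on synthetic frames; the full-search matching, block size, and search range are illustrative choices, not those of any particular standard.

```python
# Minimal sketch of motion-compensated interframe coding for one 8x8 block,
# assuming grayscale frames stored as NumPy arrays (not any specific codec).
import numpy as np

BLOCK = 8          # block size
SEARCH = 4         # +/- search range in pixels

def motion_search(ref, cur, by, bx):
    """Full search for the best-matching BLOCK x BLOCK patch in the reference frame."""
    target = cur[by:by+BLOCK, bx:bx+BLOCK].astype(np.int32)
    best_mv, best_cost = (0, 0), np.inf
    for dy in range(-SEARCH, SEARCH + 1):
        for dx in range(-SEARCH, SEARCH + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + BLOCK > ref.shape[0] or x + BLOCK > ref.shape[1]:
                continue
            cand = ref[y:y+BLOCK, x:x+BLOCK].astype(np.int32)
            cost = np.abs(target - cand).sum()        # sum of absolute differences
            if cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv

# Synthetic reference and current frames: a bright square that moves between frames.
ref = np.zeros((32, 32), np.uint8); ref[8:16, 8:16] = 200
cur = np.zeros((32, 32), np.uint8); cur[10:18, 11:19] = 200

by = bx = 8                                           # block of 'cur' to be coded
dy, dx = motion_search(ref, cur, by, bx)              # encoder: estimate the motion vector
pred = ref[by+dy:by+dy+BLOCK, bx+dx:bx+dx+BLOCK].astype(np.int16)
residual = cur[by:by+BLOCK, bx:bx+BLOCK].astype(np.int16) - pred  # what a real codec transforms and quantizes
recon = (pred + residual).astype(np.uint8)            # decoder: prediction plus residual recovers the block
print("motion vector:", (dy, dx), " max |residual|:", np.abs(residual).max())
```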
A Review of A/V Basics
Published in Al Kovalick, Video Systems in an IT Environment, 2013
In general, there are two main classes of lossy compression: intraframe and interframe coding. Intraframe coding processes each frame of video as standalone and independent of past or future frames. This allows single frames to be edited, spliced, manipulated, and accessed without reference to adjacent frames, which is why it is often used for production and videotape (e.g., DV) formats. Interframe coding, by contrast, exploits temporal redundancies between frames to reduce the overall bit rate: frame N is coded more efficiently by using information from neighboring frames. Exploiting frame redundancies can squeeze out a factor of two to three better compression than intra-only coding. For this reason, the DVD video format and ATSC/DVB transmission systems rely on interframe coding. As might be imagined, editing and splicing inter-coded frames are thorny problems because of their interdependencies. Think of inter formats as offering more quality per bit than intra formats.
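As a rough illustration of this bit-rate gap, the hedged Python sketch below compresses one synthetic frame on its own and then compresses its difference against the previous frame; zlib stands in for a real video entropy coder, so the exact ratio is only indicative.

```python
# Rough illustration of why interframe coding saves bits: compressing a frame
# difference (temporal redundancy removed) versus compressing the frame alone.
import zlib
import numpy as np

rng = np.random.default_rng(0)
h, w = 120, 160
background = rng.integers(0, 50, (h, w), dtype=np.uint8)   # static, noisy background

frames = []
for t in range(2):
    f = background.copy()
    f[40:60, 40+t:60+t] = 220        # small object moving one pixel per frame
    frames.append(f)

intra_bytes = len(zlib.compress(frames[1].tobytes()))
diff = frames[1].astype(np.int16) - frames[0].astype(np.int16)   # mostly zero where nothing moved
inter_bytes = len(zlib.compress(diff.tobytes()))

print(f"intra-only: {intra_bytes} bytes, inter (difference): {inter_bytes} bytes")
```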
Squeezing through the pipe: digital compression
Published in Jonathan Higgins, Satellite Newsgathering, 2012
The next form of compression involves looking for redundancy within each single frame and sending such frames in a continuous sequence. Two kinds of redundancy, spectral and spatial, can be handled inside each frame without reference to any other frame. Compression techniques based on these types of redundancy are therefore called ‘intra-frame coding’. Intra-frame coding techniques can be applied to a single frame of a moving picture (or to a single still image).
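The sketch below, a hedged Python/NumPy illustration rather than any broadcast codec, shows spatial redundancy being exploited inside a single frame: predicting each pixel from its left neighbour (simple DPCM) noticeably lowers the first-order entropy of what remains to be coded.

```python
# Exploiting spatial redundancy within one frame: left-neighbour prediction (DPCM)
# on a synthetic, spatially smooth frame, compared by first-order entropy.
import numpy as np

def entropy_bits_per_symbol(values):
    """First-order entropy estimate in bits per symbol."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Synthetic frame: smooth horizontal gradient plus mild noise (spatially redundant).
rng = np.random.default_rng(1)
x = np.linspace(0, 255, 160)
frame = (np.tile(x, (120, 1)) + rng.normal(0, 2, (120, 160))).clip(0, 255).astype(np.uint8)

residual = np.diff(frame.astype(np.int16), axis=1)   # each pixel minus its left neighbour

print("raw pixels:    %.2f bits/pixel" % entropy_bits_per_symbol(frame))
print("DPCM residual: %.2f bits/pixel" % entropy_bits_per_symbol(residual))
```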
Flexible FPGA 1D DCT hardware architecture for HEVC
Published in Automatika, 2023
Hrvoje Mlinarić, Alen Duspara, Daniel Hofman, Josip Knezović
The HEVC video coding layer employs the hybrid strategy that has been used by all video compression standards since H.261. Intra-frame prediction predicts data spatially from one region to another within the same frame, with no dependence on previous or subsequent frames. The initial frame of a video sequence, and each random access point, is encoded with intra-frame prediction alone. The frame is divided into block-shaped regions, and the decoder is informed of the precise block partitioning. Inter-frame, temporally predictive coding modes are used for all remaining frames of the sequence between two intra-coded frames. The encoder calculates motion data, consisting of the designated reference picture and a motion vector (MV), for use in predicting the samples of each block. The residual signal of the intra-frame or inter-frame prediction is then subjected to a linear spatial transform, and the resulting transform coefficients are scaled, quantized, entropy coded, and transmitted together with the prediction information.
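The following Python sketch illustrates why a 1D DCT core is sufficient for the 2D block transform: the 2D transform is applied separably as a 1D pass over the rows followed by a 1D pass over the columns. It uses an orthonormal floating-point DCT-II on a toy residual block, not HEVC's integer transform or its quantization.

```python
# Separable 2-D block transform built from 1-D DCT passes (rows, then columns).
import numpy as np

def dct_matrix(n):
    """n x n orthonormal DCT-II matrix."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] *= 1 / np.sqrt(2)
    return c * np.sqrt(2 / n)

N = 8
C = dct_matrix(N)

rng = np.random.default_rng(2)
residual = rng.integers(-10, 11, (N, N)).astype(float)   # toy prediction residual block

rows_transformed = residual @ C.T        # 1-D DCT applied to every row
coeffs = C @ rows_transformed            # 1-D DCT applied to every column
reconstructed = C.T @ coeffs @ C         # inverse via the transpose of the orthonormal matrix

print("max reconstruction error:", np.abs(reconstructed - residual).max())
```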
A total variation and group sparsity-based algorithm for nuclear radiation-contaminated video restoration
Published in The Imaging Science Journal, 2021
Mingju Chen, Hua Zhang, Liuman Lu, Hao Wu
In this paper, a novel radiation-contaminated image deblurring scheme is proposed via a two-stage strategy that explores intra-frame and inter-frame correlation. Because radiation spots may lead to an inaccurate similar-patch search, TV spectrum theory is applied in the first, intra-frame stage to locate the spot areas. The TV method is then used to repair the spot areas by exploiting the local structural similarity and geometric properties of a single frame. This stage produces a primary deblurred result in which the spot areas have been effectively repaired. In the second, inter-frame stage, relevant patches within and across frames are exploited to further improve the first-stage result in three steps: retrieving correlated frames, selecting similar patches, and applying group sparse representation. Combining the two stages yields the final enhanced result. Experiments on synthetic and real radiation-contaminated images demonstrate that the proposed method significantly outperforms other related methods in both objective quality measurement and subjective visual evaluation.
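As a deliberately simplified stand-in for the first, intra-frame stage, the Python sketch below fills a binary mask of detected spot pixels from their local neighbourhood by iterative diffusion; the paper's actual detection (TV spectrum theory) and TV-regularised repair are not reproduced here, only the idea of repairing spot areas from single-frame local structure.

```python
# Simplified spot repair: fill masked "radiation spot" pixels by repeated
# 4-neighbour averaging, a crude stand-in for TV-regularised single-frame repair.
import numpy as np

def repair_spots(frame, mask, iters=200):
    """Fill pixels where mask is True using iterative 4-neighbour averaging."""
    f = frame.astype(float).copy()
    f[mask] = f[~mask].mean()                      # crude initialisation
    for _ in range(iters):
        up    = np.roll(f,  1, axis=0)
        down  = np.roll(f, -1, axis=0)
        left  = np.roll(f,  1, axis=1)
        right = np.roll(f, -1, axis=1)
        avg = (up + down + left + right) / 4.0
        f[mask] = avg[mask]                        # only masked pixels are updated
    return f

# Toy frame: smooth gradient with a small saturated "radiation spot" burned in.
y, x = np.mgrid[0:64, 0:64]
clean = (2.0 * x + y).astype(float)
frame = clean.copy()
mask = np.zeros_like(frame, dtype=bool)
mask[20:26, 30:36] = True
frame[mask] = 255.0

restored = repair_spots(frame, mask)
print("mean abs error inside spot:", np.abs(restored[mask] - clean[mask]).mean())
```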
An image compression model via adaptive vector quantization: hybrid optimization algorithm
Published in The Imaging Science Journal, 2020
Pratibha Pramod Chavan, B. Sheela Rani, M. Murugan, Pramod Chavan
Both lossy and lossless compression approaches compress images by reducing image redundancy, using several fundamental techniques listed below. (1) Minimize pixel correlation: the correlation between a pixel and its neighbors can be extremely strong [7]. Once this correlation is reduced, variable-length coding theory and statistical features can be used to lower the amount of storage required. (2) Quantization: quantization reduces the amount of storage by mapping a large range of values onto a smaller set of quantum values. Quantization is a lossy procedure that causes irreversible distortion. (3) Entropy coding: entropy coding reduces the image size for storage purposes [8,9]. It assigns codewords to symbols according to the probabilities of those symbols. To reduce redundancy, traditional image coding formats such as JPEG, HEVC intra-frame coding, and JPEG2000 use a pre-defined, hand-crafted transform such as the DCT or DWT. Quantization and entropy coding are then applied to obtain a condensed bit stream [10,11].
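The hedged Python sketch below illustrates steps (2) and (3): uniform scalar quantization of transform-like coefficients followed by an entropy estimate that bounds the bits an entropy coder would need per symbol. The Laplacian source and the step size are illustrative assumptions, not values from the cited papers.

```python
# Uniform scalar quantization followed by an entropy estimate of the symbol stream.
import numpy as np

rng = np.random.default_rng(3)
coeffs = rng.laplace(0.0, 8.0, 10000)              # transform coefficients cluster near zero

step = 10.0
indices = np.round(coeffs / step).astype(int)      # quantization: many-to-one, lossy
dequantized = indices * step                       # reconstruction is only approximate

symbols, counts = np.unique(indices, return_counts=True)
p = counts / counts.sum()
entropy = -(p * np.log2(p)).sum()                  # lower bound on bits/symbol for an entropy coder

mse = np.mean((coeffs - dequantized) ** 2)
print(f"entropy: {entropy:.2f} bits/symbol, quantization MSE: {mse:.2f}")
```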