Basics of Image Processing
Published in Maheshkumar H. Kolekar, Intelligent Video Surveillance Systems, 2018
Image compression is used to reduce the amount of memory required to store a digital image and to transmit it at low bandwidth. It is used in image-transmission applications such as broadcast television, remote sensing via satellite, and other long-distance communication systems. Image storage is required for several purposes, such as documents, medical images including magnetic resonance imaging (MRI) and radiology, and motion pictures. Image-compression algorithms are divided into lossless and lossy. In lossless compression, the information content is not modified; compression is achieved by reducing redundancy, and it is used for archival purposes as well as for medical imaging and legal documents. In lossy compression, some information content is discarded and cannot be recovered. Lossy compression at low bit rates introduces compression artifacts. Lossy methods are especially suited to natural images such as photographs. Figure 1.4 shows the original image and the JPEG-compressed image at a compression ratio of 27:1. Although image 1.4(b) is compressed to 1/27 the size of the original image, its quality is still sufficiently good.
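As a concrete illustration of the size/quality trade-off described above, the minimal Python sketch below encodes an image as JPEG in memory and reports the resulting compression ratio relative to the uncompressed raw data. The file name example.png and the quality setting of 25 are illustrative assumptions, not values from the chapter.

```python
from io import BytesIO
from PIL import Image

def jpeg_compression_ratio(path, quality=25):
    """Encode an image as JPEG in memory and return the ratio of the
    uncompressed (raw RGB) size to the compressed size."""
    img = Image.open(path).convert("RGB")
    raw_bytes = img.width * img.height * 3          # 3 bytes per pixel, uncompressed
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)   # lossy encode
    return raw_bytes / buf.tell()

if __name__ == "__main__":
    # "example.png" is a placeholder; lowering quality raises the ratio
    # but makes compression artifacts more visible.
    print(f"compression ratio ~ {jpeg_compression_ratio('example.png'):.1f}:1")
```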
Squeezing through the pipe: digital compression
Published in Jonathan Higgins, Satellite Newsgathering, 2012
When pushed to the limit, compression codec processes can produce visual defects, widely referred to as ‘compression artifacts’, mostly caused by some form of quantization error. These include a ‘pixellation’ effect commonly referred to as ‘blockiness’ or ‘mosaicing’, momentarily showing as rectangular areas of picture with distinct boundaries. The visible blocks may be 8 × 8 DCT blocks or misplaced macroblocks, perhaps due to the failure of motion prediction or because of transmission path problems. Other artifacts may be seen near the edges of objects, such as ‘blurring’ caused by reducing horizontal and/or vertical resolution. This is because an edge – which is a sharp transition – can generate frequencies outside the frequency range of the system. To counter this, a process called ‘anti-aliasing’ may be applied, which removes data at too high a frequency to be reproduced. If such data is left in a signal, it generates artifacts, but if anti-aliasing is applied too aggressively, this can also create artifacts.
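To see how quantization error in 8 × 8 DCT blocks produces the "blockiness" described above, the following Python sketch applies a single coarse quantization step to every block of a grayscale image. The uniform step size of 64 is an assumption chosen to exaggerate the effect; real codecs use per-frequency quantization tables, motion prediction, and entropy coding.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

def coarse_quantize(gray, block=8, step=64):
    """Quantize every 8x8 DCT block with one coarse step size; the
    resulting quantization error shows up as visible block boundaries."""
    h, w = gray.shape
    out = gray.astype(float).copy()
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            c = dct2(out[y:y+block, x:x+block])
            c = np.round(c / step) * step            # coarse, uniform quantizer
            out[y:y+block, x:x+block] = idct2(c)
    return np.clip(out, 0, 255).astype(np.uint8)
```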
ATSC Digital Television
Published in Skip Pizzi, Graham A. Jones, A Broadcast Engineering Tutorial for Non-Engineers, 2014
If insufficient bits are used to encode the video, the resulting picture exhibits degradations known as compression artifacts. The most noticeable are pixelation and blocking. When this happens, instead of natural-looking pictures, the picture breaks up instantaneously into small or large rectangles with hard edges, either in particular areas or all over the screen. This may happen continuously or only at particularly difficult events, such as a dissolve, when every video pixel is changing. The AVC codec employs “deblocking filters,” which help reduce the visibility of blocking artifacts when insufficient bits are available.
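The sketch below conveys the basic idea behind a deblocking filter: smooth across a block boundary only when the step there is small enough to be a quantization artifact rather than a real edge. It is a deliberately simplified illustration, not the AVC deblocking filter, and the threshold of 12 is an arbitrary assumption.

```python
import numpy as np

def deblock_vertical_boundaries(gray, block=8, threshold=12):
    """Soften small intensity steps across vertical block boundaries and
    leave large steps (likely real edges) untouched."""
    out = gray.astype(float).copy()
    h, w = out.shape
    for x in range(block, w, block):                 # each vertical block boundary
        left = out[:, x - 1].copy()
        right = out[:, x].copy()
        weak = np.abs(right - left) < threshold      # small step: probable artifact
        out[weak, x - 1] = (3 * left[weak] + right[weak]) / 4
        out[weak, x] = (left[weak] + 3 * right[weak]) / 4
    return np.clip(out, 0, 255).astype(np.uint8)
```

Horizontal boundaries can be handled the same way on the transposed image; the actual AVC filter additionally adapts its strength to the quantization parameter and the coding modes of the neighbouring blocks.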
Adaptive deblocking technique based on separate modes for removing compression effects in JPEG coded images
Published in International Journal of Computers and Applications, 2021
Amanpreet Kaur, Jagroop Singh Sidhu, Jaskarn Singh Bhullar
To remove compression artifacts, various techniques have been designed over the last two decades, including in-loop and post-processing filtering methods [4]. Loop filtering is adopted in H.264/AVC and can improve coding efficiency by removing blocking artifacts. However, in-loop filtering techniques cannot deal effectively with corner outliers because they rely on a one-dimensional (1-D) filtering approach [5–6]. Post-processing deblocking techniques are more widespread and more flexible than in-loop filtering methods, because they improve the perceptual quality of the reconstructed image without any modification to the encoding or decoding mechanisms. These techniques can be classified into four categories: POCS (projection onto convex sets) methods [7–10], estimation-theoretic methods [11–14], wavelet-based methods [15–20], and frequency-domain analysis methods [21–28]. Wijewardhana and Codreanu [29] proposed a post-processing recovery technique that uses lapped transforms with a sparse image representation to remove compression artifacts from decompressed images. An artifact-detection technique for image forensics was designed in [30]. Dalmia and Okade [31] designed a method for first-quantization-matrix estimation in non-aligned double JPEG-compressed images. POCS-based algorithms are iterative and incur a high computational burden from the DCT/IDCT in every iteration; this high complexity leads to unacceptable performance in practice. Estimation-theoretic methods are also iterative and therefore less applicable to real-time applications. Spatial-domain techniques, meanwhile, execute more slowly than transform-domain techniques, and their low-pass filtering character tends to smooth or over-smooth the image. In addition, the post-processing filtering techniques above apply different strategies in different regions to keep edges from being over-smoothed or over-blurred, but they still produce discontinuities near block boundaries. Corner outliers near diagonal edges cannot be removed efficiently by existing techniques because they rely on 1-D filtering schemes, so some compression artifacts are bypassed or insufficiently filtered: deblocking filters process edges near the 8 × 8 grid, but corner outliers lying on the 4 × 4 grid are not processed. At higher compression ratios, the decoded image exhibits annoying blocking artifacts near block boundaries, ringing artifacts near original edges, and corner outliers at block corners, because many blocks lose their detail. The edges of these blocks may then be mistaken for natural edges and left untouched by 1-D filtering methods.
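To make concrete why POCS-based post-processing is computationally heavy, the sketch below shows one iteration: a smoothing step in the spatial domain followed by a projection of each block's DCT coefficients back onto the quantization constraint set, which requires a full forward and inverse DCT every pass. The 3 × 3 uniform smoothing filter, the uniform step size, the assumption that the image dimensions are multiples of the block size, and the assumption that quantized_coeffs holds the decoder's dequantized DCT values are all illustrative simplifications, not the algorithms cited above.

```python
import numpy as np
from scipy.fftpack import dct, idct
from scipy.ndimage import uniform_filter

def dct2(b):
    return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(c):
    return idct(idct(c, axis=0, norm="ortho"), axis=1, norm="ortho")

def pocs_iteration(image, quantized_coeffs, step=64, block=8):
    """One POCS pass: smooth in the spatial domain, then clip each block's
    DCT coefficients back into the interval implied by the decoded values."""
    smoothed = uniform_filter(image.astype(float), size=3)   # smoothness projection
    out = np.zeros_like(smoothed)
    h, w = smoothed.shape                                    # assumed multiples of block
    for y in range(0, h, block):
        for x in range(0, w, block):
            c = dct2(smoothed[y:y+block, x:x+block])
            q = quantized_coeffs[y:y+block, x:x+block]
            # projection onto the quantization constraint set: each coefficient
            # must stay within +/- step/2 of its dequantized value
            c = np.clip(c, q - step / 2, q + step / 2)
            out[y:y+block, x:x+block] = idct2(c)
    return np.clip(out, 0, 255)
```

Running several such iterations is what the text refers to as the high DCT/IDCT burden of POCS-based deblocking.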