Digital Image Compression
Published in Edward R. Dougherty, Digital Image Processing Methods, 2020
In lossless DPCM, the differential signal is encoded directly, often using a Huffman code or modified Huffman code tailored to the statistics of e_m. In lossy DPCM, the differential signal is quantized prior to encoding to reduce the bit rate at the expense of errors in the reconstructed image. The optimal quantizer structure depends on the subsequent encoding method: for fixed-rate applications, a Lloyd-Max MMSE quantizer (Section III.A.1) tuned to the Laplacian distribution of e_m can be used, while for variable-rate applications that include entropy coding, a uniform quantizer (Section III.A.2) is more appropriate. A DPCM scheme using a 1-bit quantizer is also known as delta modulation. A block diagram of a lossy DPCM system is shown in Fig. 6. Notice that the quantizer is incorporated into the predictor loop at the transmitter, so that quantized, reconstructed pixel values are used in forming the predictions. This allows the transmitter and receiver to track each other, since only quantized values are available to the receiver.
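To make the feedback structure concrete, the following sketch implements lossy DPCM along one image row with a simple previous-pixel predictor and a uniform quantizer inside the prediction loop; the function name, step size, and fixed initial prediction are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

def dpcm_encode_decode(row, step=8):
    """Lossy DPCM along one image row: the quantizer sits inside the
    prediction loop, so encoder and decoder track the same reconstruction."""
    recon = np.empty(len(row), dtype=float)
    indices = np.empty(len(row), dtype=int)
    prediction = 128.0                      # fixed start value known to both sides
    for m, pixel in enumerate(row.astype(float)):
        e = pixel - prediction              # differential signal e_m
        q = int(np.round(e / step))         # uniform quantizer index (to be entropy coded)
        indices[m] = q
        recon[m] = prediction + q * step    # reconstructed pixel, as the receiver would form it
        prediction = recon[m]               # previous-pixel predictor uses quantized values only
    return indices, recon

# Example: one 8-bit row; a smaller step gives lower distortion at a higher rate
row = np.array([120, 121, 123, 130, 129, 127, 126, 125], dtype=np.uint8)
idx, rec = dpcm_encode_decode(row, step=4)
print(idx)   # indices cluster near zero (Laplacian-like residuals)
print(rec)   # reconstruction tracks the input within the quantizer step
```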
Multimedia Data Compression
Published in Sreeparna Banerjee, Elements of Multimedia, 2019
Unlike entropy coding, source-coding methods incorporate the semantics of the data and can exploit spatial, temporal, and psychoacoustic models or psychovisual redundancies. The amount of compression depends on the data content. Source coding can be either lossy or lossless, although most methods are lossy. This category includes predictive coding techniques such as differential pulse code modulation (DPCM); transform coding techniques such as the fast Fourier transform (FFT) and discrete Fourier transform (DFT); layered coding techniques such as sub-band coding; and quantization techniques, which are carried out above the Nyquist frequency. DPCM is a predictive coding technique used for compressing audio signals by exploiting temporal redundancy: each sample of the audio signal is predicted from previous samples and only the prediction error is encoded. Delta modulation (DM) is a modification of DPCM.
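As a rough illustration of how DM relates to DPCM, the sketch below implements delta modulation as 1-bit DPCM: only the sign of the prediction error is transmitted, and the reconstruction moves up or down by a fixed step. The step size and function name are illustrative assumptions.

```python
import numpy as np

def delta_modulate(signal, step=0.1):
    """Delta modulation: 1-bit DPCM that transmits only the sign of the
    prediction error and reconstructs with a fixed-step staircase."""
    bits = np.empty(len(signal), dtype=np.uint8)
    recon = np.empty(len(signal))
    estimate = 0.0
    for n, x in enumerate(signal):
        bit = 1 if x >= estimate else 0     # sign of the prediction error
        bits[n] = bit
        estimate += step if bit else -step  # staircase approximation tracks the input
        recon[n] = estimate
    return bits, recon

# Example: a slowly varying audio-like signal
t = np.linspace(0, 1, 200)
x = np.sin(2 * np.pi * 2 * t)
bits, approx = delta_modulate(x, step=0.08)
```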
Critical Issues and Challenges
Published in H.R. Wu, K.R. Rao, Digital Video Image Quality and Perceptual Coding, 2017
What constitutes an efficient picture coding technique has much to do with the assumptions one has made, including the prevailing mindset, available knowledge and technology, the application environment, and the performance criteria. For example, the video signal was naturally treated as a 1-D continuous-time signal [Goo51, Hua65, IL96], and as such DPCM (Differential Pulse Code Modulation) [Cut52] was (and still is) a more efficient compression technique than PCM (Pulse Code Modulation) [Goo51, Hua65]. The performance criterion used was statistical (or source data) redundancy reduction as measured by the entropy [JN84]. Once it was realized that redundancies exist not only between adjacent samples or pixels along the same scanning line but also along other dimensions, rapid developments in picture coding led to techniques exploiting redundancies along the other spatial (i.e., vertical) axis and the temporal axis, which were in turn quickly overtaken by multi-dimensional transform and hybrid coding techniques [Cla85].
A novel resolution independent gradient edge predictor for lossless compression of medical image sequences
Published in International Journal of Computers and Applications, 2021
Urvashi Sharma, Meenakshi Sood, Emjee Puthooran
Avudaiappan compared Huffman coding, arithmetic coding, lossless predictive coding, and various other lossless image compression techniques; lossless JPEG was found to be the best of these with respect to time and compression ratio [20]. Sunil and Sharanabasaweshwar presented a new approach based on convex-smoothing problems: the input image is divided into sub-problems that are solved using a compressed-sensing method, and the proposed model's performance is evaluated in terms of PSNR and the time taken to perform the reconstruction [21]. Kabir and Monda compared edge-based transformation and entropy coding (ETEC) and prediction-based transformation and entropy coding (PTEC) schemes with existing lossless compression techniques: joint photographic experts group lossless (JPEG-LS), set partitioning in hierarchical trees (SPIHT), and differential pulse code modulation (DPCM). The ETEC and PTEC algorithms provide better compression than the other schemes, and PTEC is more suitable than ETEC when both compression ratio and computation time are taken into consideration [22].
Autonomous monitoring framework for resource-constrained environments
Published in Cyber-Physical Systems, 2018
Sajid Nazir, Hassan Hamdoun, Fabio Verdicchio, Gorry Fairhurst
Information-driven sensing architectures have been reported to benefit from on-board processing that reduces the need for communication and saves energy [21]. A system that exploits temporal correlation to reduce transmissions by not reporting values within an acceptable error range is described in [21,22]. Some schemes use adaptive sampling, but they need to maintain a buffer of raw values at the sensor node to determine which sensor values should be transmitted. A data reduction scheme based on Differential Pulse Code Modulation (DPCM) is reported in [23]. These studies trade the high cost of sending raw values for on-board computation, applying algorithms such as the Fast Fourier Transform (FFT). The work in [24] is close to our case study 2: it describes a system that measures temperature and exploits spatial and temporal correlation between sensed values. The authors define a data quality metric by dividing measurements into regions and treating values according to their closeness to a region's boundary: values near the boundaries are tracked for small changes, while values far from the boundaries are not reported. They do not report significant energy savings for a single node based on their algorithm, although clustering and other cooperative transmission strategies are discussed. In [25], time series forecasting is used to predict future values in order to reduce both sensor sampling and the number of messages transmitted: the sampling frequency is reduced once a sensor reports a constant value, and increased when the value changes. An energy saving of 87% is reported, but the scheme requires more sensing energy (due to complex sensors) than transmission energy, and the gain comes from a reduction in sensing rather than transmission energy.
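A minimal sketch of the kind of temporal-correlation filtering described above (not the exact algorithm of [21,22]): a node transmits a reading only when it leaves the acceptable error range around the last reported value. The tolerance, data, and function name are assumed for illustration.

```python
def filter_readings(readings, tolerance=0.5):
    """Report a sensor value only when it drifts outside the acceptable
    error range around the last transmitted value (temporal correlation)."""
    transmitted = []
    last_sent = None
    for t, value in enumerate(readings):
        if last_sent is None or abs(value - last_sent) > tolerance:
            transmitted.append((t, value))  # send (timestamp, value) over the radio
            last_sent = value               # receiver holds this value until the next report
    return transmitted

# Example: temperature trace with a slow drift and one step change
trace = [21.0, 21.1, 21.2, 21.1, 23.0, 23.1, 23.0]
print(filter_readings(trace, tolerance=0.5))
# -> [(0, 21.0), (4, 23.0)]  only two of seven samples are transmitted
```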
An image compression model via adaptive vector quantization: hybrid optimization algorithm
Published in The Imaging Science Journal, 2020
Pratibha Pramod Chavan, B. Sheela Rani, M. Murugan, Pramod Chavan
In 2021, Malathkar et al. [21] proposed an image compression scheme for wireless capsule endoscopy. The approach combines subsampling, Golomb-Rice coding, differential pulse code modulation, uniform quantization, and corner clipping. Owing to the unique nature of endoscopic pictures, a simplified YUV colour space was devised, which yields excellent results. The performance of several quantization and subsampling approaches was evaluated, and the proposed compression algorithm achieved a PSNR of 45.1 dB and a compression ratio of 89.3%. In terms of CR and PSNR, the suggested approach outperformed a number of previously published techniques.
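As a hedged illustration of the Golomb-Rice stage in such a pipeline (not the specific coder of [21]), the sketch below encodes DPCM-style residuals with a Rice code, i.e. a Golomb code whose parameter is a power of two. The zig-zag mapping of signed residuals and the parameter k are illustrative assumptions.

```python
def rice_encode(value, k):
    """Golomb-Rice code for a non-negative integer (k >= 1): a unary quotient
    followed by k binary remainder bits; short codewords for small values."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def zigzag(e):
    """Map a signed prediction residual to a non-negative integer."""
    return 2 * e if e >= 0 else -2 * e - 1

# Example: small residuals (common after DPCM) get short codewords
for e in [0, -1, 1, -3, 7]:
    print(e, rice_encode(zigzag(e), k=2))
```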