Variable-Length Coding: Information Theory Results (II)
Published in Yun-Qing Shi, Huifang Sun, Image and Video Compression for Multimedia Engineering, 2019
The promising arithmetic coding algorithm, which is quite different from Huffman coding, is another focus of the chapter. While Huffman coding is a block-oriented coding technique, arithmetic coding is a stream-oriented coding technique. With improvements in implementation, arithmetic coding has gained increasing popularity. Both Huffman coding and arithmetic coding are included in the international still-image coding standard developed by the Joint Photographic Experts Group (JPEG). Adaptive arithmetic coding algorithms are adopted by the international bi-level image coding standard developed by the Joint Bi-level Image Experts Group (JBIG). Note that the material presented in this chapter can be viewed as a continuation of the information theory results presented in Chapter 1.
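The stream-oriented nature of arithmetic coding can be illustrated with a minimal sketch: the whole message is mapped to a single subinterval of [0, 1), narrowed once per symbol. The alphabet and probabilities below are illustrative assumptions, not taken from the chapter, and the float arithmetic is only suitable for short messages; practical coders use integer arithmetic with renormalization.

```python
# Minimal float-based sketch of arithmetic coding's interval narrowing.
# Assumed static model; real coders renormalize to avoid precision loss.

def _cumulative_ranges(probs):
    """Map each symbol to its cumulative probability subinterval."""
    ranges, cum = {}, 0.0
    for sym, p in probs.items():
        ranges[sym] = (cum, cum + p)
        cum += p
    return ranges

def arithmetic_encode(message, probs):
    """Narrow [low, high) once per symbol; return a number inside the final interval."""
    ranges = _cumulative_ranges(probs)
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        lo_frac, hi_frac = ranges[sym]
        low, high = low + span * lo_frac, low + span * hi_frac
    return (low + high) / 2  # any value in [low, high) identifies the message

def arithmetic_decode(code, length, probs):
    """Invert the narrowing: find which symbol's subinterval contains the code."""
    ranges = _cumulative_ranges(probs)
    out, low, high = [], 0.0, 1.0
    for _ in range(length):
        span = high - low
        value = (code - low) / span
        for sym, (lo_frac, hi_frac) in ranges.items():
            if lo_frac <= value < hi_frac:
                out.append(sym)
                low, high = low + span * lo_frac, low + span * hi_frac
                break
    return ''.join(out)

probs = {'a': 0.6, 'b': 0.3, 'c': 0.1}  # assumed probabilities
code = arithmetic_encode('abac', probs)
assert arithmetic_decode(code, 4, probs) == 'abac'
```

Note the contrast with Huffman coding: no per-symbol codeword is ever emitted; the entire stream shares one fractional number, which is why arithmetic coding can approach the entropy even when symbol probabilities are far from powers of 1/2.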
Part Review on Multimedia Security
Published in Ling Guan, Yifeng He, Sun-Yuan Kung, Multimedia Image and Video Processing, 2012
Alex C. Kot, Huijuan Yang, Hong Cao
To gain high security, public/private-key encryption algorithms are incorporated into image authentication schemes, for example, for binary images [34,36–38] and for halftone images [57,58]. Localization of tampering for binary images is discussed in [34], in which the accuracy is limited to the subimage size, for example, 128 × 128. Countering the "parity attack" is a key problem for algorithms that employ odd–even enforcement to embed one bit of data [36–38]. Chaining the blocks in the shuffled domain and embedding the image fingerprint computed from one block into the next block [36] helps alleviate the attack; however, the last several blocks still suffer from the "parity attack." Watermarking of Joint Bi-level Image Experts Group 2 (JBIG2) text images [37] is done by embedding the watermark in one of the symbol instances, namely the data-bearing symbol, using the pattern-based method proposed in [36]. Recently, a list of 3 × 3 patterns with symmetrical center pixels was employed to choose the data-hiding locations in [38]. Flipping the center pixels of many patterns may break the "connectivity" between pixels or create erosion and protrusion artifacts [38]; hence, the visual quality of the watermarked image is difficult to control. In addition, employing a fixed 3 × 3 block scheme to partition the image leads to small embedding capacity. Random locations known to both the embedder and the receiver are used to carry the MACs for halftone images [57,58]. Recently, the concept of using data hiding and public-key cryptography for secure authentication has been extended to an emerging data type, electronic ink [59]. A point-insertion-based lossless embedding scheme is developed to secure the integrity of the electronic writing data and its context.
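The odd–even enforcement the review refers to can be sketched as follows: one bit is embedded per block by flipping at most one pixel so that the count of black pixels matches the bit's parity. The block size and the flipped-pixel choice below are illustrative assumptions, not the exact schemes of [36–38], which select a "flippable" pixel that least disturbs connectivity and visual quality.

```python
# Hedged sketch of odd-even (parity) enforcement for one embedded bit.
# Assumption: we flip the top-left pixel; real schemes rank candidate
# pixels by how little flipping them disturbs the binary pattern.
import numpy as np

def embed_parity_bit(block, bit):
    """Flip at most one pixel so that (number of 1s) mod 2 == bit."""
    block = block.copy()
    if int(block.sum()) % 2 != bit:
        block[0, 0] ^= 1  # illustrative choice of pixel to flip
    return block

def extract_parity_bit(block):
    """The embedded bit is simply the parity of the black-pixel count."""
    return int(block.sum()) % 2

rng = np.random.default_rng(0)
block = rng.integers(0, 2, size=(8, 8))
for bit in (0, 1):
    marked = embed_parity_bit(block, bit)
    assert extract_parity_bit(marked) == bit
    assert int(np.abs(marked - block).sum()) <= 1  # at most one pixel changed
```

The sketch also makes the "parity attack" concrete: an adversary who flips an even number of pixels inside a block leaves the parity, and hence the extracted bit, unchanged, which is why the chained-fingerprint countermeasure of [36] is needed.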
Binary medical image compression using the volumetric run-length approach
Published in The Imaging Science Journal, 2019
Erdoğan Aldemir, Gulay Tohumoglu, M. Alper Selver
Over the last decade, lossless techniques have become increasingly significant in medical image compression [1,2] and have been the subject of many studies [3,4]. Various state-of-the-art lossless algorithms provide adequate results for grey-level and bi-level images [3,5]. The existing techniques employ various algorithms to attain satisfactory compression performance. The Joint Photographic Experts Group (JPEG) and JPEG-LS standards exploit only intra-slice correlations to remove inter-pixel redundancy; JPEG-LS encodes prediction residuals with Golomb codes and constant regions with run-length coding [3]. General binary compression standards such as the Joint Bi-level Image Experts Group (JBIG) family [6], Octree, and Context-Adaptive Lossless Image Compression (CALIC) are currently in use for binary medical data compression [7,8]. Run-Length Encoding (RLE) is an algorithm employed effectively in modern compression systems as a data-to-symbol mapping [7,9–11]. The study of Shan et al. [12] combines oversampling with the RLE algorithm to further enhance compression efficiency for telemetry data. RLE-based algorithms attain high compression ratios on data whose structure spreads homogeneously, such as bi-level and grey-level images of uniform texture [13]. JPEG-LS applies RLE when coding constant regions of the image, identified by a context-classification procedure [14]. To improve the performance of JPEG, an iterative algorithm has been presented that jointly optimizes run-length coding [15]. It is reported in [16] that biased RLE showed notable compression performance and outperformed JBIG2 and the arithmetic coder.
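The data-to-symbol mapping that RLE performs can be shown in a few lines: a bi-level scanline collapses into (value, run-length) pairs, which is why homogeneous binary regions compress so well. The pair representation below is an illustrative choice, not the format of any particular standard.

```python
# Minimal sketch of run-length encoding for a bi-level scanline.
# Each maximal run of identical pixels becomes one (value, length) pair.

def rle_encode(bits):
    """Collapse a sequence of 0/1 pixels into (value, run_length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1      # extend the current run
        else:
            runs.append([b, 1])   # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the pixel sequence."""
    return [value for value, length in runs for _ in range(length)]

scanline = [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1]
runs = rle_encode(scanline)
assert runs == [(0, 4), (1, 2), (0, 5), (1, 1)]
assert rle_decode(runs) == scanline
```

A 12-pixel scanline here becomes 4 pairs; on uniform-texture regions of binary medical data the ratio is far better, while noisy or highly textured regions can expand instead, which is why standards such as JPEG-LS only switch into run mode inside constant regions.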