Variable-Length Coding: Information Theory Results (II)
Published in Yun-Qing Shi, Huifang Sun, Image and Video Compression for Multimedia Engineering, 2019
Compared with Huffman coding, arithmetic coding is quite different. Basically, Huffman coding converts each source symbol into a fixed codeword with an integral number of bits, while arithmetic coding converts a whole source symbol string into a code symbol string. To encode the same source symbol string, Huffman coding can be implemented in two different ways. One way is shown in Example 5.9: we construct a fixed codeword for each source symbol. Since Huffman coding is instantaneous, we can cascade the corresponding codewords to form the output, a 17-bit code string 00.101.11.1001.1000.01, where, for readability, the five periods are used to separate the individual codewords. As we have seen, for the same source symbol string, the final subinterval obtained with arithmetic coding is [0.1058175, 0.1058250). Note that the 15-bit binary fraction 0.000110111111111 is equal to the decimal fraction 0.1058211962, which falls into the final subinterval representing the string s1s2s3s4s5s6. This indicates that, for this example, arithmetic coding is more efficient than Huffman coding.
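To make the contrast concrete, here is a minimal Python sketch with made-up symbol probabilities and codewords (they are not those of Example 5.9, whose source model is not reproduced in this excerpt): Huffman coding emits one fixed codeword per symbol and concatenates them, while arithmetic coding narrows a single subinterval of [0, 1) for the entire string.

```python
# Hypothetical model for illustration only; the probabilities and codewords
# below are NOT those of Example 5.9.
prob = {"s1": 0.40, "s2": 0.30, "s3": 0.20, "s4": 0.05, "s5": 0.05}
codewords = {"s1": "0", "s2": "10", "s3": "110", "s4": "1110", "s5": "1111"}  # a Huffman code for prob

def huffman_encode(symbols):
    """Huffman-style coding: concatenate the fixed codeword of each symbol."""
    return "".join(codewords[s] for s in symbols)

def arithmetic_interval(symbols):
    """Arithmetic-style coding: map the whole string to one subinterval of [0, 1)."""
    # Cumulative probability ranges, e.g. s1 -> [0.0, 0.4), s2 -> [0.4, 0.7), ...
    ranges, c = {}, 0.0
    for s, p in prob.items():
        ranges[s] = (c, c + p)
        c += p
    low, high = 0.0, 1.0
    for s in symbols:
        width = high - low
        s_low, s_high = ranges[s]
        low, high = low + width * s_low, low + width * s_high
    return low, high  # any number in [low, high) identifies the whole string

msg = ["s1", "s2", "s3", "s2", "s1"]
print(huffman_encode(msg))        # '010110100' -- fixed codewords, concatenated
print(arithmetic_interval(msg))   # one narrow subinterval of [0, 1)
```

In this toy model both outputs describe the same message; the advantage of arithmetic coding appears when symbol probabilities are far from powers of 1/2, because it is not forced to spend an integral number of bits on each symbol.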
Signal Information, Coding, and Compression
Published in Samuel D. Stearns, Don R. Hush, Digital Signal Processing with Examples in MATLAB®, 2016
The next type of entropy coding we discuss is called arithmetic coding [5,12,14,15]. Arithmetic coding differs from Huffman coding in that the encoded version of the signal vector or array does not consist of a sequence of codes for individual signal elements (symbols). In arithmetic coding, the signal is encoded by processing the symbols one at a time in order, and the end result is a single long binary fraction rather than a sequence of symbol codes.
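As a sketch of where that single binary fraction comes from, the following snippet (with an assumed final interval, not taken from the excerpts above) finds the shortest binary fraction lying inside the encoder's final interval [low, high); its bits are the long binary fraction that represents the whole signal.

```python
import math

def shortest_binary_fraction(low, high):
    """Return the shortest binary fraction in [low, high) as a bit string.

    Each extra bit of precision halves the grid of representable values;
    we stop at the first precision n for which some multiple of 2**-n
    falls inside the target interval.
    """
    n = 1
    while True:
        k = math.ceil(low * 2**n)      # smallest multiple of 2**-n that is >= low
        if k / 2**n < high:            # it is also below high, so it lies in [low, high)
            return "0." + format(k, f"0{n}b")
        n += 1                         # not inside yet; add one more bit of precision

# Assumed final interval for illustration.
print(shortest_binary_fraction(0.538, 0.540))   # -> '0.1000101' (= 0.5390625)
```

A practical arithmetic coder does not wait for the final interval; it emits these bits incrementally and renormalizes the interval as it narrows. The fraction above is just the conceptual end result.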
A lossless compression method for logging data while drilling
Published in Systems Science & Control Engineering, 2021
Shan Song, Taiwei Lian, Wei Liu, Zhengbing Zhang, Mingzhang Luo, Aiping Wu
It can be seen from Figure 8 that all the compression methods using grouping predictive coding achieve a higher compression ratio; in particular, the proposed method, i.e. FPHC with a group of frames, achieves the best compression. Although its compression ratio is only slightly higher than that of adaptive Huffman coding and adaptive arithmetic coding, the proposed method has the advantages of simple implementation and better error resilience. Therefore, it is better suited to real-time logging engineering.