Introduction to Coding Theory
Published in R. Balakrishnan, Sriraman Sridharan, Discrete Mathematics, 2019
R. Balakrishnan, Sriraman Sridharan
Coding theory has its origin in communication engineering. Since Shannon's seminal paper of 1948 [22], it has been strongly influenced by mathematics, which supplies a variety of techniques for tackling its problems. Algebraic coding theory makes heavy use of matrices, groups, rings, fields, vector spaces, algebraic number theory and, not least, algebraic geometry. In algebraic coding, each message is regarded as a block of symbols taken from a finite alphabet. On most occasions, this alphabet is Z2 = {0, 1}, so each message is a finite string of 0s and 1s; for example, 00110111 is a message. Messages are transmitted through a communication channel. Such channels may be subject to noise, and consequently the messages can get changed. The purpose of an error-correcting code is to add redundancy symbols to the message, according to some rule, so that the original message can be retrieved even if it has been garbled. Each encoded message is called a codeword, and the set of codewords is a code.
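As a toy illustration of adding redundancy (not drawn from the chapter), here is a minimal Python sketch of a 3-fold repetition code over Z2 = {0, 1}, which corrects a single flipped bit per block; the function names are illustrative only.

```python
def encode(message):
    """Repeat each bit three times: '01' -> '000111'."""
    return "".join(bit * 3 for bit in message)

def decode(received):
    """Majority vote over each block of three bits, correcting one error per block."""
    blocks = [received[i:i + 3] for i in range(0, len(received), 3)]
    return "".join("1" if block.count("1") >= 2 else "0" for block in blocks)

codeword = encode("00110111")
garbled = "1" + codeword[1:]   # channel noise flips the first bit
print(decode(garbled))         # recovers the original message 00110111
```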
Colour filter array demosaicking over compression through modified grey wolf optimization technique
Published in The Imaging Science Journal, 2018
M. S. Safna Asiq, W. R. Sam Emmanuel
The symbols and their weights (i.e. the pixel values and their occurrence counts) are arranged in increasing order of weight. The pixel value with the highest occurrence receives the shortest code word, and the pixel values with the lowest occurrence receive the longest code words. Redundancy is reduced by iteratively merging the probabilities of the pixel values with the lowest occurrence until only two probabilities are left, at which point the goal is achieved. After the biorthogonal wavelet transform is applied, the code words are mapped to the pixel values to obtain the compressed image. An entropy encoder is used to obtain the optimal code for the pixel values and their occurrences as in Equation (12) (Figure 6), where the occurrence count of each pixel value determines its probability. An inverse wavelet transform is applied to the compressed image to obtain the decompressed image. The image is reconstructed back to its original form as a decompressed image, leading to high-order demosaicking.
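The merging procedure described above is essentially Huffman coding over the pixel-value histogram. A minimal Python sketch, assuming a flat list of pixel values as input; the names and structure are illustrative, not the authors' implementation.

```python
import heapq
from collections import Counter

def huffman_codes(pixels):
    """Build prefix code words for pixel values by repeatedly merging the two
    least-frequent entries; frequent pixel values end up with shorter code words."""
    counts = Counter(pixels)
    if len(counts) == 1:  # degenerate image with a single pixel value
        return {next(iter(counts)): "0"}
    # Heap entries: (weight, tie_breaker, {pixel_value: code_word_so_far})
    heap = [(w, i, {v: ""}) for i, (v, w) in enumerate(counts.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)  # two lowest-occurrence entries
        w2, _, c2 = heapq.heappop(heap)
        merged = {v: "0" + code for v, code in c1.items()}
        merged.update({v: "1" + code for v, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes([3, 3, 3, 7, 7, 9])  # pixel value 3 gets the shortest code word
print(codes)
```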
Distributed polar-coded OFDM based on Plotkin’s construction for half duplex wireless communication
Published in International Journal of Electronics, 2018
Rahim Umar, Fengfan Yang, Shoaib Mughal, HongJun Xu
where C1 = (n, k1, d1) and C2 = (n, k2, d2) are two linear block codes of the same length n, and '⊕' denotes the modulo-2 sum. In Plotkin's |u|u + v| construction the code words have the form (u, u ⊕ v) with u ∈ C1 and v ∈ C2; the resulting code word has dimension k1 + k2, code word length 2n, code rate (k1 + k2)/2n and minimum Hamming distance min(2d1, d2). The most interesting point regarding this code word is the fact that it can also be obtained using the following generator matrix:

G = [ G1  G1 ]
    [ 0   G2 ]

where G1 and G2 are the generator matrices of C1 and C2, respectively.
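As an illustration, here is a minimal Python sketch of this construction, assuming two small example codes (the (4, 3) even-weight code and the (4, 1) repetition code) chosen purely for demonstration; they are not the component codes used in the paper.

```python
import numpy as np

# Example generator matrices: G1 for the (4, 3) even-weight code, G2 for the (4, 1) repetition code.
G1 = np.array([[1, 0, 0, 1],
               [0, 1, 0, 1],
               [0, 0, 1, 1]], dtype=int)
G2 = np.array([[1, 1, 1, 1]], dtype=int)

def plotkin_generator(G1, G2):
    """Build the generator matrix [[G1, G1], [0, G2]] of the |u|u + v| code."""
    k1, n = G1.shape
    k2, _ = G2.shape
    top = np.hstack([G1, G1])
    bottom = np.hstack([np.zeros((k2, n), dtype=int), G2])
    return np.vstack([top, bottom]) % 2

def encode(G, message_bits):
    """Encode a message of length k1 + k2 over GF(2)."""
    return np.array(message_bits, dtype=int) @ G % 2

G = plotkin_generator(G1, G2)
codeword = encode(G, [1, 0, 1, 1])  # first 3 bits -> u in C1, last bit -> v in C2
print(codeword)                     # (u, u XOR v), length 2n = 8
```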
An erasure-based scheme for reduction of PAPR in spatial multiplexing MIMO-OFDM using Reed-Solomon codes over GF(2^16 + 1)
Published in International Journal of Electronics, 2018
Mohammad Reza Motazedi, Reza Dianat
We assume a coded OFDM-based system in which every subcarrier is active and modulated by QPSK, and the OFDM signals are oversampled. The inner convolutional code and the QPSK constellation are the same as those used in Motazedi and Dianat (2017), and as shown there and in Fischer and Siegl (2009), our simulations also confirm that other constellation schemes give almost the same PAPR reduction performance. Except for the symbol '65536', all symbols over GF(2^16 + 1) can be represented using 16 bits. For the proposed schemes, each time the symbol '65536' occurs in a code word, the code word is dropped and another one is generated instead, as discussed in Motazedi and Dianat (2017). The undesirable symbol '65536' appears with probability 1/65537. If this symbol occurs in the original code word, the price is just adding a constant value to all of the coded symbols of the original code word and a small loss of bandwidth efficiency (the two most significant bits of the information block are assigned for creation of a desirable code word). The computational burden is small. For instance, the longest code word used in this work has a length of 512 symbols, so 128 generated code words contain 128 × 512 = 65536 symbols. On average, the symbol '65536' appears just once in these 128 code words. Therefore, one code word needs modification, and the probability of modification for a code word is 1/128. The number of additions in one modification is 512, so the average number of required additions is 512/128 = 4 additions per code word.
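A minimal Python sketch of the modification idea described above, assuming a code word is simply a list of field symbols over GF(2^16 + 1); the shift search and names here are illustrative and do not reproduce the authors' scheme, which derives the constant from the two reserved information bits.

```python
# Symbols take values 0 .. 65536; the value 65536 does not fit in 16 bits.
P = 2**16 + 1  # field size of GF(2^16 + 1)

def fix_codeword(codeword):
    """If the 17-bit symbol 65536 appears, add a constant (mod 65537) to every symbol
    so that all symbols fit in 16 bits; return the shifted code word and the shift used."""
    if 65536 not in codeword:
        return codeword, 0
    present = set(codeword)
    for c in range(1, P):  # find a shift that leaves no symbol equal to 65536
        if all((s + c) % P != 65536 for s in present):
            return [(s + c) % P for s in codeword], c
    raise ValueError("no valid shift exists")  # cannot happen if distinct symbols < P

codeword = [12, 65536, 40000, 7]
fixed, shift = fix_codeword(codeword)
print(fixed, shift)  # every symbol now representable in 16 bits
```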