Hardware Based Data Compression using Lempel-Ziv-Welch Algorithm
Published in Durgesh Kumar Mishra, Nilanjan Dey, Bharat Singh Deora, Amit Joshi, ICT for Competitive Strategies, 2020
Onkar Choudhari, Marisha Chopade, Sourabh Chopde, Vaishali Ingle
Data compression is the art of reducing the number of bits required to store or transmit given information by using encoding techniques. The primary aim of data compression is to eliminate redundancy. The design of data compression algorithms involves trade-offs among factors such as the compression ratio, the amount of distortion introduced, and the computational resources required to compress and decompress the data [1]. Compression can be either lossy or lossless. Lossless compression reduces the amount of data by identifying and removing statistical redundancy; no information is lost, which is why lossless techniques are used for text compression. Lossy compression reduces data by eliminating unnecessary or less important information. It is typically used for images [2] and audio, where a small loss in resolution is often undetectable and acceptable.
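As an illustration of how a dictionary-based lossless coder eliminates redundancy, the following is a minimal Python sketch of the textbook LZW encoder. It is not the hardware design described in the chapter, and the function name lzw_encode is chosen here purely for illustration.

    def lzw_encode(data: str) -> list[int]:
        """Encode a string into a list of LZW codes (software sketch only)."""
        # Start with single-character entries: codes 0-255 for all byte values.
        dictionary = {chr(i): i for i in range(256)}
        next_code = 256
        w = ""
        codes = []
        for c in data:
            wc = w + c
            if wc in dictionary:
                w = wc                       # extend the current match
            else:
                codes.append(dictionary[w])  # emit code for the longest match
                dictionary[wc] = next_code   # add the new phrase to the dictionary
                next_code += 1
                w = c
        if w:
            codes.append(dictionary[w])
        return codes

    print(lzw_encode("TOBEORNOTTOBEORTOBEORNOT"))

Repeated substrings are replaced by single dictionary codes, which is where the compression comes from; the decoder rebuilds the same dictionary on the fly, so no side table needs to be transmitted.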
Steganography and Medical Data Security
Published in S. Ramakrishnan, Cryptographic and Information Security, 2018
There are two major problems with medical images in the DICOM file format: the size of DICOM images, and the security of personal patient data in the DICOM header. Each DICOM image is a file ranging in size from 1 MB to 100 MB. The amount of disk space required to store medical images increases by between 10% and 20% each year [9]. For this reason, compression algorithms are used to reduce the size of the medical images that occupy petabytes of space within the PACS. There are two major categories of compression algorithms: lossy and lossless. Lossless compression allows the original data to be recovered exactly from the compressed data, with a compression ratio of about 2:1 or 3:1. Lossy compression cannot recover the original data; its compression ratio can be 10:1 to 50:1 or even higher. However, radiological diagnosis can be affected by the losses in medical images compressed with lossy methods. For this reason, lossless compression methods such as run-length encoding (RLE), Huffman coding, Lempel-Ziv-Welch (LZW) and JPEG 2000 are preferred for reducing image size [1,9].
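To make the lossless round trip concrete, the following is a minimal run-length encoding sketch in Python. It illustrates the principle only and is not the DICOM RLE codec, which packs runs into byte segments with its own header format.

    def rle_encode(data: bytes) -> list[tuple[int, int]]:
        """Collapse the input into (run length, byte value) pairs."""
        runs = []
        for b in data:
            if runs and runs[-1][1] == b:
                runs[-1][0] += 1           # extend the current run
            else:
                runs.append([1, b])        # start a new run
        return [(n, b) for n, b in runs]

    def rle_decode(runs) -> bytes:
        return b"".join(bytes([b]) * n for n, b in runs)

    sample = b"\x00\x00\x00\xff\xff\x07"
    assert rle_decode(rle_encode(sample)) == sample   # no information lost

RLE pays off on images with long runs of identical pixel values, such as the uniform background regions common in radiological images.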
Development of Steganography and Steganalysis
Published in Shivendra Shivani, Suneeta Agarwal, Jasjit S. Suri, Handbook of Image-Based Security Techniques, 2018
Shivendra Shivani, Suneeta Agarwal, Jasjit S. Suri
The secret message is encoded into its compressed version before being embedded into the cover image. As we already know, compression methods can be broadly classified into two types: lossy compression and lossless compression, as shown in Figure 12.3. In lossy compression methods, we cannot exactly recover the original data after decompression. They are generally used for image and video compression, where some loss of information can be ignored by the human visual system. But this loss of data is not acceptable if the target data is textual information, because a change in any letter can change the entire meaning of the secret. Hence, for textual data, which is the usual form of a secret message in steganography, we use lossless compression methods; at the receiver end, we recover exactly the same information as existed before compression. Lossless compression of a secret message may be divided into two types: fixed-length encoding and variable-length encoding.
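The difference between the two can be seen on a toy alphabet. In the Python sketch below, assumed purely for illustration, the same message is encoded once with a fixed-length code and once with a hand-chosen variable-length prefix code that gives the most frequent symbol the shortest codeword.

    message = "AABAC"

    fixed = {"A": "00", "B": "01", "C": "10"}        # 2 bits per symbol
    variable = {"A": "0", "B": "10", "C": "11"}      # prefix-free, frequency-aware

    fixed_bits = "".join(fixed[s] for s in message)
    variable_bits = "".join(variable[s] for s in message)

    print(fixed_bits, len(fixed_bits))        # 0000010010 10
    print(variable_bits, len(variable_bits))  # 0010011 7

Both codes decode back to the original message exactly; the variable-length code simply spends fewer bits on the symbols that occur most often.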
Realization of RFID-Based DAS Using an RF Transceiver and Huffman Coding
Published in IETE Journal of Research, 2020
Soumen Ghosh, Palash Kumar Kundu
The task of compression consists of two components: an algorithm that takes a message and generates a "compressed" representation, called encoding, and an algorithm that reconstructs the original message, or some approximation of it, from the compressed representation, called decoding. Data compression is classified into lossy and lossless compression algorithms [15]. Lossy compression reduces the file size by eliminating unnecessary data that is not noticed by a human being after decoding. Lossless compression handles each bit of data inside the file to reduce the size without losing any data after decoding [16]. In data acquisition, sensor output is collected by the tag, sent to the reader, and stored in the memory of the computer. This creates a storage-space problem, because large quantities of data are received by the tag, and it is here that the compression technique plays a vital role. The RFID data compression problem is to change the structure of the input stream into an output stream with a reduced data size but with no loss of information [6]. Huffman coding is a lossless compression technique.

Huffman Algorithm
C is a set of n characters; Q is a min-priority queue keyed on frequency f.
Step 1: n ← |C|
Step 2: Q ← C
Step 3: for i ← 1 to n − 1
Step 4:     do allocate a new node z
Step 5:        left[z] ← x ← EXTRACT-MIN(Q)
Step 6:        right[z] ← y ← EXTRACT-MIN(Q)
Step 7:        f[z] ← f[x] + f[y]
Step 8:        INSERT(Q, z)
Step 9: return EXTRACT-MIN(Q)
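A runnable version of the same construction, sketched here in Python with heapq standing in for the min-priority queue Q (this is an illustrative sketch, not the authors' implementation):

    import heapq
    from collections import Counter

    def huffman_codes(text: str) -> dict[str, str]:
        """Build a Huffman tree over text and return symbol -> bitstring."""
        # Heap entries: (frequency, tie-breaker, tree); a tree is either a
        # symbol (leaf) or a (left, right) pair (internal node z).
        heap = [(f, i, sym) for i, (sym, f) in enumerate(Counter(text).items())]
        heapq.heapify(heap)
        counter = len(heap)
        while len(heap) > 1:
            fx, _, x = heapq.heappop(heap)                    # EXTRACT-MIN(Q)
            fy, _, y = heapq.heappop(heap)                    # EXTRACT-MIN(Q)
            heapq.heappush(heap, (fx + fy, counter, (x, y)))  # f[z] = f[x] + f[y]
            counter += 1
        codes = {}
        def walk(node, prefix=""):
            if isinstance(node, tuple):
                walk(node[0], prefix + "0")   # left edge contributes a 0
                walk(node[1], prefix + "1")   # right edge contributes a 1
            else:
                codes[node] = prefix or "0"   # single-symbol edge case
        walk(heap[0][2])
        return codes

    print(huffman_codes("RFID TAG DATA"))

Because the two least frequent subtrees are merged at every step, the resulting prefix code assigns the shortest codewords to the most frequent symbols, which is what makes Huffman coding optimal among symbol-by-symbol codes.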