Communication Optimization and Edge Analytics for Smart Water Grids
Published in Smart Water Grids (Panagiotis Tsakalides, Athanasia Panousopoulou, Grigorios Tsagkatakis, Luis Montestruque, eds.), 2018
Sokratis Kartakis, Julie A. McCann
Based on Figure 3.15, the quantitative memory-consumption results per algorithm are the following: PPMd consumes the most memory (10 MB); bzip2 [11] requires around 9 MB of memory and data segments of 100 KB; zlib requires 120 KB; for LZMA [2], the lowest initial configuration requires 6.5 KB and the minimum dictionary size is 4096 B, so that the minimum memory usage during execution can be 30 KB in total (decompression uses about 1/10 of the compression memory); LZ4 requires at least 16 KB of memory; miniLZO [39] requires memory only during compression (dictionary creation), with a minimum requirement of 8192 B; and the minimum required memory for S-LZW-MC [41] is 3.25 KB. The latter algorithm is an embedded version of LZW, so further evaluation is not required.
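As a rough illustration of how LZMA's memory footprint is dominated by its dictionary size, the following Python sketch configures LZMA2 with the 4096-byte minimum dictionary mentioned above. It is an assumption for illustration only; the chapter's measurements concern embedded C implementations, not Python.

```python
import lzma

# Illustrative sketch: force LZMA2's dictionary down to its 4096-byte
# minimum (the dominant memory cost) via a custom filter chain.
filters = [
    {
        "id": lzma.FILTER_LZMA2,
        "preset": 0,          # lowest-effort preset (smallest internal state)
        "dict_size": 4096,    # minimum dictionary size, as cited in the text
    }
]

data = b"sensor,timestamp,flow\n" * 1000  # hypothetical sensor payload

compressed = lzma.compress(data, format=lzma.FORMAT_XZ, filters=filters)
restored = lzma.decompress(compressed)

assert restored == data
print(f"original: {len(data)} B, compressed: {len(compressed)} B")
```

Smaller dictionaries reduce RAM at the cost of compression ratio, which is the trade-off the chapter's memory figures reflect.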
A comprehensive optimization strategy for real-time spatial feature sharing and visual analytics in cyberinfrastructure
Published in International Journal of Digital Earth, 2019
Text data compression is itself a very active research topic. Classic data compression algorithms include run-length encoding (RLE; Robinson and Cherry 1967), the Burrows–Wheeler transform (Burrows and Wheeler 1994), Huffman coding (Huffman 1952), prediction by partial matching (PPM; Cleary and Witten 1984), LZ77 (Ziv and Lempel 1977), LZ78 (Ziv and Lempel 1978), etc. Currently, there are dozens of available data compression methods and toolkits derived from these algorithms. In consideration of the requirements for data interoperability and performance optimization, the target data compression methods for WFS should (1) be robust and perform well in terms of compression speed and compression ratio; (2) be widely adopted; and (3) have software development kits (SDKs) available for integration on both the server and client sides. The DEFLATE (Deutsch 1996) and LZMA (Lempel–Ziv–Markov chain; Pavlov 2007) algorithms are selected for integration and testing in this research, as both are widely adopted. The DEFLATE algorithm is a combination of LZ77 and Huffman coding, while the LZMA algorithm is a derivative of LZ77. Generally, the DEFLATE method compresses files faster than LZMA, but the generated files have a lower compression ratio (Li et al. 2015).
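To make the stated speed-versus-ratio trade-off concrete, here is a minimal, hypothetical Python micro-benchmark (not part of the paper; the payload and compression levels are assumptions) comparing zlib's DEFLATE against LZMA on a repetitive GeoJSON-like string. Absolute numbers will vary with the data.

```python
import lzma
import time
import zlib

# Repetitive GeoJSON-like payload, standing in for WFS feature data.
payload = (b'{"type":"Feature","geometry":{"type":"Point","coordinates":'
           b'[-77.0364,38.8951]},"properties":{"name":"sample"}}\n') * 5000

for name, compress in (("DEFLATE", lambda d: zlib.compress(d, 6)),
                       ("LZMA", lambda d: lzma.compress(d, preset=6))):
    start = time.perf_counter()
    out = compress(payload)
    elapsed = time.perf_counter() - start
    print(f"{name}: ratio={len(payload) / len(out):.1f}x, "
          f"time={elapsed * 1000:.1f} ms")
```

On typical text data, DEFLATE finishes noticeably faster while LZMA yields the smaller output, which is consistent with the cited comparison.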
A Secured Healthcare Management and Service Retrieval for Society Over Apache Spark Hadoop Environment
Published in IETE Journal of Research, 2023
The objective of this compression is to preserve the original data format and structure. LZMA stands for the Lempel-Ziv-Markov chain algorithm. It is a lossless compression technique, and it can also be used in hardware implementations. We are the first to propose LZMA for data compression that is fully functional for healthcare application prospects. In this paper, we aim to provide useful healthcare applications for all text files, irrespective of file size. For this purpose, we divide the input files into a number of fixed-size blocks and process these blocks independently. Building a temporary dictionary is time-consuming and causes overhead when copying data between the device and memory; hence, we have avoided this dictionary structure. Simply speaking, we use the input file itself as the "dictionary", finding redundancies within a block and eliminating them. A range encoder is applied after the redundant copies of the text or file have been removed. In addition, a common issue that arises in data compression techniques is that important data will be compressed. To mitigate the above-mentioned problems, we present this compression technique.

The compression is processed in data nodes deployed in an HDFS environment. Typically, HDFS comprises a NameNode (master) and DataNodes (slaves), where the master node holds directories of attributes such as name, size, permissions, and last modified time, while the slave nodes store the compressed data. The master node maps each file to its list of blocks and each block to the list of data nodes that store it. Slave nodes report to the master node periodically through heartbeat messages that carry information about the blocks they store. Hence, the master node builds its metadata from these block reports and always stays synchronized with the data nodes in the cluster.

The components involved in this compression technique are a compressed write buffer, a packet modifier, a compressed packet detector/reader, and CPU/memory monitoring. The data packet format is given in Figures 5–11.
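A minimal Python sketch of the block-wise, dictionary-free idea described above follows; the function names, block size, and use of Python's lzma module are assumptions for illustration, not the paper's HDFS/Spark implementation.

```python
import lzma

BLOCK_SIZE = 64 * 1024  # hypothetical fixed block size; the excerpt does not fix one

def compress_in_blocks(path):
    """Compress a file in fixed-size blocks, each block independently.

    Rough sketch of the block-wise approach described above (the paper's
    actual pipeline runs on HDFS DataNodes with its own packet format).
    Each block serves as its own "dictionary", so no shared temporary
    dictionary is built or copied between the device and memory.
    """
    blocks = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            # LZMA finds redundancies within the block and range-encodes
            # the residual; blocks remain independently decompressible.
            blocks.append(lzma.compress(block, preset=1))
    return blocks

def decompress_blocks(blocks):
    """Reassemble the original file contents from the compressed blocks."""
    return b"".join(lzma.decompress(b) for b in blocks)
```

Because every block is self-contained, blocks can be distributed across data nodes and decompressed independently, which matches the HDFS block-to-DataNode mapping described above.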