Technology Beyond the Standard
Published in Klaus Diepold, Sebastian Moeritz, Understanding MPEG-4, 2012
Klaus Diepold, Sebastian Moeritz
SVG: Scalable Vector Graphics (SVG) defines an XML-based language for describing two-dimensional vector graphics and mixed vector/raster graphics. SVG comprises three types of graphical objects: vector graphic shapes, e.g., paths consisting of straight lines and curves (Bézier curves or elliptical arcs); images; and text. In addition, graphical objects can be grouped, styled, transformed, and composited into previously rendered objects. SVG drawings can be made interactive (simple interaction based on pointing devices) and dynamic (using deterministic animations). Animations can be defined and triggered either by embedding SVG animation elements in SVG content or via scripting; in both cases, the animation is entirely contained in the SVG document. SVG lacks mechanisms for updating a presentation dynamically, as well as means for adapting the content to streaming. W3C has proposed a recommendation annex that explains how to use a lossless general-purpose tool (e.g., gzip) to compress description files, but such compression tools are not appropriate for supporting the streaming of media content. More recently, work on SVG has been specifying profiles targeted at mobile terminals.
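Since the annex simply applies a general-purpose tool to the XML text, the idea can be illustrated in a few lines of Python. The drawing below is a made-up example, and the .svgz file name follows the common convention for gzip-compressed SVG; this is a sketch of the compression step only, not of streaming:

```python
import gzip

# A made-up SVG document: one Bézier path and one text element.
svg = b"""<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <path d="M10 80 Q 52 10, 95 80" fill="none" stroke="black"/>
  <text x="10" y="95">hello</text>
</svg>"""

# Compress the description file with a lossless general-purpose tool (gzip);
# ".svgz" is the usual extension for the result.
with open("drawing.svgz", "wb") as f:
    f.write(gzip.compress(svg))

print(len(svg), "bytes uncompressed ->", len(gzip.compress(svg)), "bytes compressed")
```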
Deep Learning
Published in Seyedeh Leili Mirtaheri, Reza Shahbazian, Machine Learning Theory to Applications, 2022
Seyedeh Leili Mirtaheri, Reza Shahbazian
Autoencoders compress the input into a lower-dimensional feature representation and then reconstruct the output from this representation. The feature is a compact summary or compression of the input, also called the latent-space representation. An autoencoder consists of three components: an encoder, a code, and a decoder. The encoder compresses the input and produces the code; the decoder then reconstructs the input using only this code. Therefore, to build an autoencoder you need an encoding method, a decoding method, and a loss function to compare the output with the target. Autoencoders are mainly a dimensionality reduction or compression algorithm with a couple of important properties. The first property is that they are data-specific: autoencoders are only able to meaningfully compress data similar to what they have been trained on. Since they learn features specific to the given training data, they differ from a standard data compression algorithm like gzip. Therefore, you cannot expect an autoencoder trained on handwritten digits to compress landscape photos. The second property of autoencoders is that they are lossy. The output of the autoencoder will not be exactly the same as the input; it will be a close but degraded representation. If you want lossless compression, autoencoders are not the way to go. Remember that these networks belong to the unsupervised category of machine learning algorithms. To train an autoencoder you do not need to do anything fancy; just throw the raw input data at it. Autoencoders are considered an unsupervised learning technique since they do not need explicit labels to train on, but to be more precise they are self-supervised, because they generate their own labels from the training data.
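A minimal sketch of the three components in PyTorch (the layer sizes, the 784-dimensional input, and the random stand-in batch are illustrative assumptions, not from the text). The loss compares the reconstruction with the input itself, which is exactly what makes the training self-supervised:

```python
import torch
import torch.nn as nn

# Encoder compresses a 784-dim input to a 32-dim code; decoder reconstructs.
class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)      # latent-space representation
        return self.decoder(code)   # lossy reconstruction

model = Autoencoder()
loss_fn = nn.MSELoss()              # compare output with the input itself
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 784)             # stand-in batch; real use: e.g. MNIST digits
for _ in range(5):
    optimizer.zero_grad()
    reconstruction = model(x)
    loss = loss_fn(reconstruction, x)   # self-supervised: input is its own label
    loss.backward()
    optimizer.step()
```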
Image Compression
Published in Scott E. Umbaugh, Digital Image Processing and Analysis, 2017
For the GIF and TIFF image file formats, the LZW algorithm was specified, but there was some controversy over this, since the algorithm was patented; the patent expired in 2003. Before then, since these image formats were already widely used, other methods similar in nature to LZW were developed for use with these image file formats. Similar versions of this algorithm include adaptive Lempel-Ziv, used in the UNIX compress utility, and the Lempel-Ziv 77 (LZ77) algorithm used in the UNIX gzip utility.
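A minimal sketch of the dictionary-growing idea behind LZW is shown below (encoder only; the GIF/TIFF bitstream packing and variable code widths are omitted, and the function name is ours):

```python
def lzw_compress(data: bytes) -> list[int]:
    # Seed the dictionary with all single-byte phrases 0..255.
    dictionary = {bytes([i]): i for i in range(256)}
    w = b""
    codes = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                            # extend the current phrase
        else:
            codes.append(dictionary[w])       # emit code for the known phrase
            dictionary[wc] = len(dictionary)  # learn the new, longer phrase
            w = bytes([byte])
    if w:
        codes.append(dictionary[w])
    return codes

# Repetitive input yields fewer codes than input bytes.
print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))
```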
SARTRES: a semi-autonomous robot teleoperation environment for surgery
Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2021
Md Masudur Rahman, Mythra V. Balakuntala, Glebys Gonzalez, Mridul Agarwal, Upinder Kaur, Vishnunandan L. N. Venkatesh, Natalia Sanchez-Tamayo, Yexiang Xue, Richard M. Voyles, Vaneet Aggarwal, Juan Wachs
Since the sequence is not generated by a memoryless source, direct calculation of its entropy is non-trivial. From Shannon's source coding theorem, we note that, for large sequences, an efficient compression algorithm compresses the sequence to the optimal number of bits required, which is equal to the entropy of the sequence (Cover and Thomas 2012). Hence, we measure the entropy via the amount of data required to efficiently store the sequence. We use the DEFLATE algorithm (Larsson 1996) to compress our data; specifically, we use the gzip implementation of DEFLATE.
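In that spirit, the compressed size can serve as an entropy estimate. A minimal sketch (the function name is ours; the fixed gzip header adds a small constant overhead, so this slightly overestimates for short sequences):

```python
import gzip
import os

def entropy_estimate_bits(sequence: bytes) -> int:
    # Bits an efficient (DEFLATE/gzip) compressor needs to store the sequence;
    # by the source coding argument, this approaches the entropy for large inputs.
    return 8 * len(gzip.compress(sequence, compresslevel=9))

# A repetitive (low-entropy) sequence compresses far smaller than random bytes.
print(entropy_estimate_bits(b"ab" * 5000))
print(entropy_estimate_bits(os.urandom(10000)))
```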
A feature-based intelligent deduplication compression system with extreme resemblance detection
Published in Connection Science, 2021
Xiaotong Wu, Jiaquan Gao, Genlin Ji, Taotao Wu, Yuan Tian, Najla Al-Nabhan
Through duplicate and resemblance detection, an incoming chunk is classified as a unique, duplicate, or similar chunk. As illustrated in Section 3.2, the different cases call for different compression methods, namely delta and deep compression. Delta compression (e.g. Xdelta (MacDonald, 2000), Zdelta (Xia et al., 2014)) utilises an existing chunk to compress a new chunk and stores only the differing part of the latter. Deep compression (e.g. LZW (Nelson, 1989), GZIP (Gailly & Adler, n.d.)) utilises traditional compression approaches to compress a chunk on its own. It is noted that both compression and decompression are lossless, which implies that the quality of the data is not decreased.
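The distinction can be sketched with Python's zlib, using a DEFLATE preset dictionary as a stand-in for delta compression; this mimics the spirit of Xdelta/Zdelta rather than reproducing either tool, and the chunk contents are made up:

```python
import zlib

base_chunk = b"The quick brown fox jumps over the lazy dog. " * 50  # existing, similar chunk
new_chunk  = b"The quick brown fox jumps over the lazy cat. " * 50  # incoming chunk

# Deep compression: compress the new chunk on its own (GZIP/LZW style).
deep = zlib.compress(new_chunk, 9)

# Delta-style compression: hand the similar existing chunk to DEFLATE as a
# preset dictionary, so matches against it cost few bits and mostly the
# differences need to be stored.
comp = zlib.compressobj(level=9, zdict=base_chunk)
delta = comp.compress(new_chunk) + comp.flush()

# Both schemes are lossless: decompression recovers the chunk exactly.
decomp = zlib.decompressobj(zdict=base_chunk)
assert decomp.decompress(delta) == new_chunk
assert zlib.decompress(deep) == new_chunk

print(len(new_chunk), len(deep), len(delta))  # compare the three sizes
```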
Dynamic Deep Genomics Sequence Encoder for Managed File Transfer
Published in IETE Journal of Research, 2022
Amandeep Kaur, Ajay Pal Singh Chauhan, Ashwani Kumar Aggarwal
The performance of MFT, HTTP, and FTP is evaluated in terms of the size of and time taken for data transfer, using the standard encoding scheme and the proposed data encoding scheme, each with and without GNU zip (GZIP). The genomics dataset used is a text-based FASTA file containing either protein sequences or nucleotide sequences. This work is validated on the data described in Section 2.2.2 and on a dataset generated by varying the frequency of occurrence of genomic symbols.
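The size comparison for the GZIP case can be reproduced with a short sketch (the FASTA file name is hypothetical; the transfer-time measurement over MFT/HTTP/FTP is not shown):

```python
import gzip
import os

# Hypothetical FASTA file of protein or nucleotide sequences.
src = "sequences.fasta"

with open(src, "rb") as f:
    raw = f.read()

# Payload size when GNU zip (GZIP) is applied before transfer.
with gzip.open(src + ".gz", "wb", compresslevel=9) as f:
    f.write(raw)

print("raw bytes: ", len(raw))
print("gzip bytes:", os.path.getsize(src + ".gz"))
```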