Image Compression
Published in Vipin Tyagi, Understanding Digital Image Processing, 2018
In recent years, storage and transmission facilities have grown rapidly, so large amounts of digital image data can be stored and transmitted easily. However, alongside these advances in storage capacity and transmission bandwidth, the volume of digital image data is also growing exponentially, driven in part by the adoption of High Definition (HD) technology in digital imaging. For example, an 8-bit grayscale image of 512 × 512 pixels contains 256 kilobytes of data. Adding color information triples the data size, and HD resolutions push the storage demand into the gigabytes. For video at 25 frames per second, even one second of color film requires approximately 19 megabytes of memory; thus, a typical storage medium of 512 MB can hold only about 26 seconds of film. Because the storage requirements of image and video files are so large, file sizes need to be reduced using compression.
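A quick back-of-the-envelope calculation reproduces these figures. The Python sketch below is purely illustrative; it assumes 1 byte per grayscale pixel, three bytes per color pixel, and treats the 512 MB medium as 512 × 10⁶ bytes, which matches the text's "about 26 seconds".

```python
# Illustrative arithmetic only, reproducing the figures in the excerpt above.
width = height = 512
gray_bytes = width * height          # 8-bit grayscale: 1 byte per pixel
color_bytes = gray_bytes * 3         # three color channels (RGB)
second_bytes = color_bytes * 25      # 25 frames per second, uncompressed

print(gray_bytes / 1024)             # 256.0  -> 256 KB grayscale image
print(second_bytes / 1024**2)        # 18.75  -> ~19 MB per second of color video
print(512e6 / second_bytes)          # ~26.0  -> ~26 s fit on a 512 MB medium
```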
Video Production and Post-Production
Published in Lionel Felix, Damien Stolarz, Jennifer Jurick, Hands-On Guide to Video Blogging and Podcasting, 2013
Lionel Felix, Damien Stolarz, Jennifer Jurick
What does all this file and volume size stuff mean in terms of video capture? If your volume is formatted as FAT32, the largest file the system can support is 4 GB. With DV video capture working out to roughly 13 GB per hour, that's not a lot to work with. On NTFS file systems, your DV tape will run out long before the file gets anywhere near the size limit.
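Working that out numerically (an illustrative sketch, using the 4 GB FAT32 limit and the rough 13 GB/hour DV rate cited above):

```python
# Rough capture-time arithmetic for the FAT32 case above (illustrative only).
FAT32_FILE_LIMIT_GB = 4        # maximum single-file size on FAT32
DV_RATE_GB_PER_HOUR = 13       # approximate DV capture data rate

minutes_per_file = FAT32_FILE_LIMIT_GB / DV_RATE_GB_PER_HOUR * 60
print(f"~{minutes_per_file:.0f} minutes of DV per file on FAT32")  # ~18 minutes
```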
Big data utilisation and its effect on supply chain resilience in Emirati companies
Published in International Journal of Logistics Research and Applications, 2023
Ioannis Manikas, Balan Sundarakani, Mohamed Shehabeldin
Big Data refers to large amounts of data that are collected daily by a business and cannot easily be stored or processed using traditional tools (Sanders 2014). The size of Big Data (file size) is not its only feature; companies must also take into account factors such as variety and the speed at which these data are produced (velocity). Variety refers to the various sources from which Big Data originate and the various forms they may take (Gärtner and Hiebl 2017). According to Hurwitz et al. (2013) and Hofmann (2017), Big Data describes data sets with the following characteristics:
- High volume (Volume)
- Wide variety from different sources and in different formats (Variety)
- Fast creation, replication and dissemination (Velocity)
From μCT data to CFD: an open-source workflow for engineering applications
Published in Engineering Applications of Computational Fluid Mechanics, 2022
Kevin Kuhlmann, Christoph Sinn, Judith Marie Undine Siebert, Gregor Wehinger, Jorg Thöming, Georg R. Pesch
The post-processing of the CT data (step I) is done using the open-source software ImageJ Fiji (https://imagej.net/software/fiji/). First, the image stacks are cropped to the region of interest and a median filter (radius 3 px) is applied. This removes artifacts and noise from the images and is beneficial for the subsequent edge detection (Bovik et al., 1987; Reddy et al., 2017). Thereafter, the images are binned by a factor of 0.5 in each of the three directions, so that only half of the pixels remain along each axis. Consequently, the voxel size is doubled and the amount of data is reduced to one eighth. This is an optional step that reduces both the file size and the resolution of the data; however, in the case of the OCFs, binning is necessary to be able to process the data further. For the sphere and the POCS, both the binned and the original image stacks are used from this point on, so that the differences in the resulting STL files can be analysed. To reconstruct the surface, object and background pixels are identified and separated by applying ImageJ's default auto-threshold to the grey values of the images. This algorithm is based on the iterative procedure published by Ridler and Calvard (1978): in every iteration, the average grey values of the background and object pixels are computed and the threshold is set to the arithmetic mean of these two values. After thresholding, the images are segmented and the pixel values are normalized by the object value to obtain a pseudo-binary image stack (containing only values of 0 and 1).
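For readers who prefer a scripted pipeline, the sketch below is a rough Python/scikit-image analogue of these steps (median filter, 2× binning, iterative isodata/Ridler-Calvard threshold, pseudo-binary output). It is not the authors' Fiji workflow: the file name and filter size are assumptions, and scipy's box median only approximates ImageJ's radius-3 disk median.

```python
# Approximate Python analogue of the ImageJ/Fiji steps described above.
import numpy as np
import tifffile
from scipy.ndimage import median_filter
from skimage.transform import downscale_local_mean
from skimage.filters import threshold_isodata

# Hypothetical cropped CT image stack, axes (z, y, x).
stack = tifffile.imread("ct_stack.tif")

# Median filter to suppress noise and artifacts (box of size 3; ImageJ's
# radius-3 disk median covers a slightly larger neighbourhood).
filtered = median_filter(stack, size=3)

# Bin by 2 in every direction: voxel size doubles, data shrinks to 1/8.
binned = downscale_local_mean(filtered, (2, 2, 2))

# Iterative Ridler-Calvard (isodata) threshold, the basis of ImageJ's
# default auto-threshold.
t = threshold_isodata(binned)

# Pseudo-binary stack: 0 = background, 1 = object.
binary = (binned > t).astype(np.uint8)
```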
An Overview of Digital Audio Steganography
Published in IETE Technical Review, 2020
Hrishikesh Dutta, Rohan Kumar Das, Sukumar Nandi, S. R. Mahadeva Prasanna
The work described in [94] evaluates four methods of MP3 data hiding: unused header bit stuffing, unused side information bit stuffing, empty frame stuffing, and ancillary bit stuffing. All of these post-encoding methods are independent of parameters such as bit rate, sampling rate, and variable/constant bit rate encoding, and they provide a large storage capacity without affecting the audio data or changing the original file size. Unused header (or side information) bit stuffing exploits the fact that nothing inspects the rarely used bits in an MP3 file header (or side information section), and overwrites these bits with hidden data without causing any actual damage. MP3 files often contain frames with no valid audio data, usually at the beginning or the end; in empty frame stuffing, data is embedded into these parts, which yields a significant amount of storage without affecting audio quality or MP3 file size. In the ancillary bit stuffing method, the ancillary data field is used to store the hidden information.
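To make the header bit stuffing idea concrete, here is a minimal, hypothetical Python sketch that hides one payload bit per frame in the rarely used "private" bit of the MPEG-1 Layer III frame header. It is a toy illustration under simplifying assumptions (frames located by sync pattern only, no CRC handling), not the implementation evaluated in [94].

```python
# Toy "unused header bit stuffing": write one payload bit per MP3 frame into
# the 'private' bit (bit 0 of the third header byte). Illustrative only.
def hide_bits_in_private_bit(mp3: bytearray, payload_bits: list[int]) -> int:
    i, written = 0, 0
    while i < len(mp3) - 4 and written < len(payload_bits):
        # Frame sync: 11 set bits spanning byte i and the top of byte i+1.
        if mp3[i] == 0xFF and (mp3[i + 1] & 0xE0) == 0xE0:
            mp3[i + 2] = (mp3[i + 2] & 0xFE) | (payload_bits[written] & 1)
            written += 1
            # A real implementation would compute the frame length from the
            # bitrate/sampling fields and skip the whole frame; skipping only
            # the header risks matching false sync patterns in audio data.
            i += 4
        else:
            i += 1
    return written  # number of payload bits actually embedded
```

Because only a bit that decoders ignore is rewritten in place, the audio data and the file size are unchanged, which is exactly the property the excerpt attributes to these post-encoding methods.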