Statistical Calculations
Published in Julio Sanchez, Maria P. Canton, Software Solutions for Engineers and Scientists, 2018
Julio Sanchez, Maria P. Canton
Computer data used in statistical calculations can originate from many conceivable sources, and be formatted in virtually unlimited ways. Raw data can come directly from sensors and instruments. Primary data can be stored in any type of device and in multiple file formats. Processed data can be in standard or in proprietary formats and data types. For example, the Flexible Image Transport System (FITS) was developed by NASA’s Science Office of Standards and Technology (NOST) to provide for the interchange and storage of astronomical data sets. The basic document describing the FITS standard is over 70 pages long and the data can be represented in five different data types. On the other hand, the Jet Propulsion Laboratory of the California Institute of Technology has developed the Planetary Data System (PDS) to further refine astronomical data that specifically relates to planetary science. The documentation for this standard extends to over two hundred pages.
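As a concrete illustration of reading one such standardized format, the short Python sketch below uses the astropy.io.fits module to open a FITS file and inspect its header and data type. The file name is a placeholder, and which keywords are present depends on the particular data set; this is a minimal sketch, not part of the original text.

```python
# Minimal sketch: inspecting a FITS file with astropy (file name is a placeholder).
from astropy.io import fits

with fits.open("example.fits") as hdul:      # hypothetical input file
    hdul.info()                              # list all header/data units (HDUs)
    primary = hdul[0]
    # BITPIX encodes the data type (e.g. 8, 16, 32, -32, -64).
    print(primary.header.get("BITPIX"))
    data = primary.data                      # NumPy array with the image/table data
    if data is not None:
        print(data.dtype, data.shape)
```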
Analysis of Machine Learning Techniques for Airfare Prediction
Published in Lavanya Sharma, Mukesh Carpenter, Computer Vision and Internet of Things, 2022
Jaskirat Singh, Deepa Gupta, Lavanya Sharma
The collected data is raw and contains values of more than one data type, but the machine learning (ML) algorithms used for training require numerical input (i.e., integers). The data must therefore be converted into numerical format, which also reduces time complexity. This is not as simple as it appears: it involves considerable work on the raw data to understand it, using methods such as data visualization and data modeling. Different algorithms are then applied to obtain predictions together with their accuracy percentages and the required parameters [9–16].
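As a rough illustration of this conversion step (not the authors' exact pipeline), the sketch below uses pandas and scikit-learn to encode categorical airfare features into integers before training; the column names and values are hypothetical.

```python
# Illustrative sketch only: converting mixed-type raw data to integers for an ML model.
# Column names ("airline", "source", "destination", "price") are hypothetical.
import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({
    "airline":     ["IndiGo", "Air India", "IndiGo"],
    "source":      ["Delhi", "Mumbai", "Delhi"],
    "destination": ["Mumbai", "Delhi", "Kolkata"],
    "price":       [4500, 6200, 5100],
})

# Encode each categorical column as integer labels.
for col in ["airline", "source", "destination"]:
    df[col] = LabelEncoder().fit_transform(df[col])

print(df.dtypes)    # all columns are now numeric and ready for training
print(df.head())
```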
Digital twin of tunnel construction for safety and efficiency
Published in Daniele Peila, Giulia Viggiani, Tarcisio Celestino, Tunnels and Underground Cities: Engineering and Innovation meet Archaeology, Architecture and Art, 2020
R. Tomar, J. Piesk, H. Sprengel, E. Isleyen, S. Duzgun, J. Rostami
The remainder of the section describes the construction of a virtual tunnel based on LiDAR scanning. A point cloud of the Colorado School of Mines' Edgar Experimental Mine was collected with LiDAR (Figure 2). The raw data requires cleaning and noise filtering. After pre-processing, the point cloud is transformed into a polygon mesh (Figure 3).
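A minimal sketch of such a pipeline, using the open-source Open3D library rather than the authors' own tooling, is shown below; the input file name and filter parameters are assumptions.

```python
# Sketch of point-cloud pre-processing and meshing with Open3D (not the authors' tooling).
import open3d as o3d

# Load the raw LiDAR point cloud (file name is a placeholder).
pcd = o3d.io.read_point_cloud("edgar_mine_scan.ply")

# Noise filtering: remove statistical outliers (parameters are illustrative).
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Estimate normals, which surface reconstruction requires.
pcd.estimate_normals()

# Transform the cleaned point cloud into a polygon (triangle) mesh.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("edgar_mine_mesh.ply", mesh)
```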
Automated Neural Network-based Survival Prediction of Glioblastoma Patients Using Pre-operative MRI and Clinical Data
Published in IETE Journal of Research, 2023
Gurinderjeet Kaur, Prashant Singh Rana, Vinay Arora
The 3D brain MRI NIfTI scans are converted into PNG format in this pre-processing stage. First, label 4 for ET (yellow colour) is changed to label 3, because label 3 is missing from the segmentation labels. All modalities are read using the nibabel Python package, which processes neuroimaging file formats such as NIfTI. The MRI scans are normalized before segmentation is performed. The height and width of the scans are kept the same, i.e., 240 × 240, but the depth is reduced from 155 slices to a single slice per file. Raw modalities are saved as PNG files in RGBA format: the FLAIR, T1ce, T1, and T2 modalities are stored in the Red, Green, Blue, and Alpha channels, respectively, one slice at a time. Thus, 155 PNG files are generated for each patient, with all four modalities as the four channels of the RGBA format. Similarly, the segmentation labels are saved as PNG files with the same labels in all channels of the RGB format.
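The sketch below illustrates this kind of conversion under stated assumptions: the file names, the simple min-max normalization to 0–255, and the use of Pillow for PNG writing are ours, not necessarily the authors' code.

```python
# Illustrative sketch: packing four MRI modalities into RGBA PNG slices.
# File names and the min-max normalization are assumptions.
import numpy as np
import nibabel as nib
from PIL import Image

def norm255(vol):
    """Min-max normalize a volume to 0-255 uint8."""
    vol = vol.astype(np.float32)
    vol -= vol.min()
    if vol.max() > 0:
        vol /= vol.max()
    return (vol * 255).astype(np.uint8)

# Read the four modalities with nibabel (240 x 240 x 155 each).
flair = norm255(nib.load("patient_flair.nii.gz").get_fdata())
t1ce  = norm255(nib.load("patient_t1ce.nii.gz").get_fdata())
t1    = norm255(nib.load("patient_t1.nii.gz").get_fdata())
t2    = norm255(nib.load("patient_t2.nii.gz").get_fdata())

# One RGBA PNG per axial slice: FLAIR -> R, T1ce -> G, T1 -> B, T2 -> A.
for z in range(flair.shape[2]):
    rgba = np.stack([flair[:, :, z], t1ce[:, :, z],
                     t1[:, :, z], t2[:, :, z]], axis=-1)
    Image.fromarray(rgba, mode="RGBA").save(f"slice_{z:03d}.png")
```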
Sources of Error in HDRI for Luminance Measurement: A Review of the Literature
Published in LEUKOS, 2021
Sarah Safranek, Robert G. Davis
Camera sensors have varying methods for capturing spectral information, contributing to error in the interpretation of color from the HDR images used to capture luminance measurements. The individual pixels of the camera sensor are unable to distinguish between wavelengths of incoming light, so to allow for color capture, red, green, and blue (RGB) filters are placed over the individual pixels. Typically, the RGB arrays are arranged in a Bayer pattern made of 25% red filters, 50% green filters, and 25% blue filters. A final RGB value for each pixel is determined during image processing on board the camera and involves multiple interpolations from surrounding pixels. The exact interpolation processes for a given camera model remain proprietary information, but are known to vary among manufacturers and sensor types (Teman 2017). While color sensors can distinguish between millions of colors, they cannot fully depict the color spaces corresponding to human perception. Color information is recorded differently depending on whether the image is saved in a RAW or JPEG format. RAW images contain the unprocessed image data directly from the camera sensor and consequently require considerable storage space. Although JPEG file sizes are smaller, the pixel values are compressed and correction adjustments like white-balancing, tone curves, and color saturation are applied. Both file formats are often used in HDRI application studies, but RAW is recommended over JPEG, especially if there is significant color information in the scene (Stanley 2016; Teman 2017; Tyukhova and Waters 2013; Varghese et al. 2014).
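To make the Bayer arrangement concrete, the sketch below separates the red, green, and blue samples of an RGGB Bayer mosaic before any interpolation; the actual demosaicing performed on board a camera is proprietary and considerably more sophisticated.

```python
# Sketch: separating the color samples of an RGGB Bayer mosaic (illustration only;
# on-camera demosaicing is proprietary and far more sophisticated).
import numpy as np

def split_bayer_rggb(mosaic):
    """Return sparse R, G, B planes from a single-channel RGGB Bayer mosaic."""
    r = np.zeros_like(mosaic)
    g = np.zeros_like(mosaic)
    b = np.zeros_like(mosaic)
    r[0::2, 0::2] = mosaic[0::2, 0::2]   # 25% of pixels carry red
    g[0::2, 1::2] = mosaic[0::2, 1::2]   # 50% of pixels carry green
    g[1::2, 0::2] = mosaic[1::2, 0::2]
    b[1::2, 1::2] = mosaic[1::2, 1::2]   # 25% of pixels carry blue
    return r, g, b

# Example: a random 4 x 4 sensor readout.
mosaic = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
r, g, b = split_bayer_rggb(mosaic)
```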
Tutorial: Luminance Maps for Daylighting Studies from High Dynamic Range Photography
Published in LEUKOS, 2021
C. Pierson, C. Cauwerts, M. Bodart, J. Wienold
The camera should be set on a tripod to avoid misalignment problems during the HDR generation (Reinhard et al. 2006). Depending on the number of LDR images to capture, the images can be saved either in jpeg format, which is compressed, or in raw format, which is uncompressed and requires around four times more storage. The benefit of raw images is that they contain unprocessed electrical charge information directly from the image sensor (Stanley 2016). Therefore, the generation of HDR images from raw files does not require the preliminary derivation of the response function. The raw2hdr program (Ward 2011), from the Radiance suite, is the equivalent of hdrgen for raw image input; all subsequent calibration steps are similar. Although jpeg compression causes the loss of much color data (4096 tonal levels in raw files compared to 256 levels of information for each channel in jpeg files (Stanley 2016)), it is decent at preserving luminance data, and is much more convenient to store. Moreover, when shooting in raw format, the maximum burst during continuous shooting, namely the number of continuous shots that can be taken by the camera without stopping, might be limited to a number lower than the 15 recommended LDR images.
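For readers who want to experiment outside the Radiance toolchain, the sketch below merges a bracketed LDR sequence into an HDR image using OpenCV's Debevec calibration and merge routines. This is an alternative illustration, not the hdrgen/raw2hdr workflow described above, and the file names and exposure times are assumptions.

```python
# Alternative illustration with OpenCV (not the hdrgen/raw2hdr workflow above).
# File names and exposure times are assumptions.
import cv2
import numpy as np

files = ["ldr_01.jpg", "ldr_02.jpg", "ldr_03.jpg"]           # bracketed LDR sequence
exposure_times = np.array([1/125, 1/30, 1/8], dtype=np.float32)
images = [cv2.imread(f) for f in files]

# Recover the camera response function, then merge into a radiance map.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, exposure_times)

merge = cv2.createMergeDebevec()
hdr = merge.process(images, exposure_times, response)

cv2.imwrite("merged.hdr", hdr)   # Radiance RGBE output
```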