Coastal and Estuarine Waters: Optical Sensors and Remote Sensing
Published in Yeqiao Wang, Coastal and Marine Environments, 2020
Multispectral imaging is remote sensing that obtains optical representations in two or more ranges of frequencies or wavelengths. Multispectral imaging sensors capture image data in two or more wavelength bands across the electromagnetic spectrum. Each channel on the sensor is sensitive to radiation within a narrow wavelength band, resulting in a multilayer image (Figure 4.3) that contains both the brightness and spectral (color) information of the pixels sampled. In a multilayer image, the data from each wavelength band forms an image that carries specific spectral information about the pixels in that image. By “stacking” images of the same area, a multilayer image is formed from the individual component images. Note that multispectral images do not produce the spectrum of a pixel, because wavelengths may be separated by filters or by instruments that are sensitive only to particular bands of the spectrum. Multispectral images are the main type of image acquired by most space-based or airborne radiometer systems. Radiometers are devices that measure the flux of electromagnetic radiation. Satellites may carry several radiometers in order to acquire data from selected portions of the electromagnetic spectrum. For example, one radiometer may acquire data in the red–green–blue (RGB) portion of the visible spectrum [400–700 nanometers (nm)], a second may acquire data in the near infrared (700–3000 nm), and another might acquire data in the mid-infrared to thermal region (greater than 3000 nm).
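The "stacking" of component images described above can be sketched with a few lines of NumPy. This is an illustrative example, not code from the chapter; the band choices and 4×4 image size are arbitrary assumptions.

```python
import numpy as np

# Hypothetical single-band images of the same 4 x 4 pixel area,
# as might come from three radiometer channels.
red = np.random.rand(4, 4)    # visible red channel (illustrative)
green = np.random.rand(4, 4)  # visible green channel (illustrative)
nir = np.random.rand(4, 4)    # near-infrared channel (illustrative)

# "Stacking" the component images along a new axis yields a
# multilayer image: rows x columns x bands.
multilayer = np.stack([red, green, nir], axis=-1)
print(multilayer.shape)  # (4, 4, 3)

# Each pixel is now a short vector of brightness values, one per band,
# not a continuous spectrum.
pixel = multilayer[0, 0, :]
print(pixel.shape)  # (3,)
```

Each layer keeps its own spectral meaning, while the stack as a whole gives every pixel one brightness value per band.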
Fundamentals
Published in N.C. Basantia, Leo M.L. Nollet, Mohammed Kamruzzaman, Hyperspectral Imaging Analysis and Applications for Food Quality, 2018
The best way to overcome both problems is to reduce the number of wavelengths recorded by the hyperspectral system, thus developing a multispectral system. Multispectral imaging systems are based on the same concept as hyperspectral imaging, but use far fewer wavebands (~2–20) which are irregularly spaced, as opposed to the continuous spectra in hyperspectral image data. While a typical hyperspectral instrument acquires ~250 wavebands, only a few are highly related to certain quality attributes. Therefore, one may select a handful of optimal wavebands and preserve only the most relevant information. By minimising the size of the hypercube, redundant information is eliminated from the chemometric prediction model and real-time predictions are possible due to improved analysis speeds. Furthermore, due to the issue of multicollinearity associated with spectroscopic techniques utilising the NIR region, it has been found that the high spectral resolution of hyperspectral images does not necessarily lead to increased analytical precision, and the accuracy of predictions may be improved by removing redundant wavebands (Elmasry et al., 2012). If a waveband does not carry useful spectral information, it will only add noise to a chemometric model, and thus reducing the high dimensionality of the hyperspectral image dataset may help overcome this issue. Lastly, the cost of the system is also significantly reduced if only a handful of key wavelengths are used as an instrument can be built using simpler, and thus less expensive, components.
Sensing for the Color of the Perfect Tomato
Published in Denise Wilson, Sensing the Perfect Tomato, 2019
A less cumbersome but often equally effective alternative to hyperspectral imaging is multispectral imaging. Multispectral imaging evaluates light scattered from objects, including plants and crops, at specific wavelengths rather than across a whole spectrum of wavelengths, as is the case with hyperspectral imaging. Ideally, these limited and specific wavelengths of incident light (and resulting scattered light) are chosen to provide the greatest resolution and information for monitoring what is of interest. For tomatoes, specific wavelengths within the red and green bands of the visible spectrum (Figure 5.2a) can be chosen to optimize resolution and accuracy in monitoring ripeness and optimal harvesting times. Incident light wavelength can be controlled in two different ways: by choosing a light source that covers only the desired wavelength, or by filtering a broad-spectrum light source to select only the wavelengths of interest. The former method requires narrow-bandwidth light sources such as lasers, which can be expensive and limited in availability. The latter method uses filters, which, while less expensive than lasers, attenuate the intensity of the incident light reaching the fruit and, as a result, can limit the accuracy of the resulting scattered light measurements. Light-emitting diodes (LEDs) are another option for multispectral imaging, but LEDs have a broader color (wavelength) bandwidth than lasers and can limit color resolution for the purposes of tracking ripening on a day-to-day basis. LEDs also do not deliver the power and intensity that lasers do, which can be advantageous for decreasing power consumption on mobile devices but may not provide sufficient illumination for accurate multispectral images.
Deep learning approach for hyperspectral image demosaicking, spectral correction and high-resolution RGB reconstruction
Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2021
Peichao Li, Michael Ebner, Philip Noonan, Conor Horgan, Anisha Bahl, Sébastien Ourselin, Jonathan Shapey, Tom Vercauteren
Intraoperative hyperspectral imaging (HSI) provides a non-contact, non-ionising and non-invasive solution suitable for many medical applications (Lu and Fei 2014; Shapey et al. 2019; Clancy et al. 2020). HSI can provide rich high-dimensional spatio-spectral information within the visible and near-infrared electromagnetic spectrum across a wide field of view. Compared to conventional colour imaging that provides red, green, and blue (RGB) colour information, HSI can capture information across multiple spectral bands beyond what the human eye can see, thereby facilitating tissue differentiation and characterisation. Unlike fluorescence and ultrasound imaging, HSI exploits the inherent optical characteristics of different tissue types. It captures measurements of light that provide quantitative diagnostic information on tissue perfusion and oxygen saturation, enabling improved tissue characterisation relative to fluorescence and ultrasound imaging (Lu and Fei 2014). Depending on the number of acquired spectral bands, hyperspectral imaging may also be referred to as multispectral imaging, but for simplicity the hyperspectral terminology will be used. Single hyperspectral image data typically span three dimensions: two represent the 2D spatial dimensions and the third represents spectral wavelength, as illustrated in Figure 1(a). Such 3D HSI data are therefore often referred to as hyperspectral cubes, or hypercubes for short. In addition, time comes as a fourth dimension in the context of dynamic scenes, such as those acquired during surgery. Hyperspectral cameras can broadly be divided into three categories based on their acquisition method, namely spatial scanning, spectral scanning and snapshot cameras (Shapey et al. 2019; Clancy et al. 2020). Spatial scanning acquires the entire wavelength spectrum simultaneously for either a single pixel or a line of pixels, using a linear or 2D array detector, respectively.
The camera then scans spatially across pixels over time to complete the hyperspectral cube. Spectral scanning, on the other hand, captures the entire spatial scene at a given wavelength with a 2D array detector, then switches to different wavelengths over time to complete the scan. These two types of spectral camera can acquire hyperspectral data with high spatial and spectral resolution, but their long acquisition times prevent them from providing the live image display required for real-time intraoperative use.
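The hypercube layout and the two scanning strategies described above can be illustrated with NumPy array indexing. The dimensions (64×64 pixels, 30 bands, 5 frames) are arbitrary assumptions for illustration, not values from the paper.

```python
import numpy as np

# Hypothetical dimensions: 64 x 64 spatial pixels, 30 spectral bands.
height, width, n_bands = 64, 64, 30

# A single hyperspectral "cube": two spatial axes plus one spectral axis.
hypercube = np.zeros((height, width, n_bands))
print(hypercube.ndim)  # 3

# For dynamic scenes such as surgery, time adds a fourth axis.
n_frames = 5
sequence = np.zeros((n_frames, height, width, n_bands))
print(sequence.ndim)  # 4

# Spatial scanning: acquire all wavelengths at once for one line of
# pixels, then move to the next line over time.
for row in range(height):
    hypercube[row, :, :] = np.random.rand(width, n_bands)

# Spectral scanning: acquire the whole spatial scene at one wavelength,
# then switch to the next wavelength over time.
for band in range(n_bands):
    hypercube[:, :, band] = np.random.rand(height, width)
```

Either loop eventually fills the same cube; the difference is which axis is swept over time, which is what drives the long acquisition times noted above.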