Smart Software for Real-Time Image Analysis
Published in Abdel-Badeeh M. Salem, Innovative Smart Healthcare and Bio-Medical Systems, 2020
Marius Popescu, Antoanela Naaji
Reflection, or mirroring, transforms the original image so that each pixel is reflected across a specified axis to a new position in the target image. The reflection may be taken about the horizontal axis (the axis of abscissas) or the vertical axis (the axis of ordinates). Mirroring may also be performed about an axis pointing in an arbitrary direction, provided that it passes through the origin. In general, image processing involves two essential procedures: image fusion and image pseudo-coloring. Image fusion is the process of combining relevant information from a set of images into a single image, such that the resulting image contains more information than any of the images used before the merging [7]. Various methods have been developed to achieve image fusion. The in-focus regions of an image have pixels of greater intensity than the rest.
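To make the reflection operation concrete, here is a minimal NumPy sketch (our illustration, not code from the chapter; the function name and axis convention are assumptions) that mirrors an image about the horizontal or the vertical axis by reversing row or column order:

```python
import numpy as np

def mirror(image: np.ndarray, axis: str) -> np.ndarray:
    """Reflect an image about the horizontal or the vertical axis.

    axis='horizontal' reverses the row order (top-to-bottom flip,
    i.e., reflection across a horizontal axis);
    axis='vertical' reverses the column order (left-to-right flip,
    i.e., reflection across a vertical axis).
    """
    if axis == "horizontal":
        return image[::-1, ...]   # reverse rows
    if axis == "vertical":
        return image[:, ::-1, ...]  # reverse columns
    raise ValueError("axis must be 'horizontal' or 'vertical'")
```

Reflection about an arbitrary axis through the origin can then be obtained by composing a rotation, an axis-aligned flip, and the inverse rotation.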
Fusion of Images from Different Electro-Optical Sensing Modalities for Surveillance and Navigation Tasks
Published in Rick S. Blum, Zheng Liu, Multi-Sensor Image Fusion and Its Applications, 2018
Figure 7.12 shows the results of all scene recognition and target detection tasks investigated here. As stated before, the ultimate goal of image fusion is to produce a combined image that displays more information than either of the original images. Figure 7.12 shows that this aim is achieved only for the following perceptual tasks and conditions:
– the detection of roads, where CF1 outperforms each of the input image modalities;
– the recognition of water, where CF1 yields the highest observer sensitivity; and
– the detection of vehicles, where the three fusion methods tested perform significantly better than the original imagery.
Image Registration and Fusion
Published in Jitendra R. Raol, Ajith K. Gopal, Mobile Intelligent Autonomous Systems, 2016
In image fusion, the idea is to obtain a fused image from two or more individual images of the same object or scene so as to enhance the overall information. Image fusion is the process of combining relevant information from two or more images into one composite image. The input images may be captured by different sensors or by the same sensor under different conditions. The resulting image contains more accurate descriptive information than any of the input images. The images have to be registered before they are fused. Image fusion techniques are categorized as pixel-level, feature-level or decision-level, according to the level at which they are applied. One example is the fusion of magnetic resonance imaging (MRI) and computed tomography (CT) images in medical imaging. As these images contain complementary information, the fused result is an enhanced image with more detail.
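As a concrete illustration of pixel-level fusion on registered inputs, the following minimal sketch (our own, using a simple weighted-averaging rule rather than any method from the chapter) assumes the two images, e.g., an MRI and a CT slice, are already registered and of equal size:

```python
import numpy as np

def fuse_average(a: np.ndarray, b: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Pixel-level fusion of two registered images by weighted averaging.

    a, b : registered images of identical shape (e.g., MRI and CT slices)
    w    : weight given to the first image, 0 <= w <= 1
    """
    if a.shape != b.shape:
        raise ValueError("images must be registered and of equal shape")
    # Work in floating point to avoid integer overflow/rounding.
    return w * a.astype(np.float64) + (1.0 - w) * b.astype(np.float64)
```

A weighted average retains information from both modalities; sharper rules (e.g., a per-pixel maximum) instead favor the locally more salient source.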
An automatic fusion algorithm for multi-modal medical images
Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2018
Mst. Nargis Aktar, Andrew J. Lambert, Mark Pickering
The different categories of image fusion include pixel-level fusion, feature-level fusion and decision-level fusion (Jiang and Wang 2014; Yang and Li 2012; James and Dasarathy 2014; Li et al. 2017). In pixel-level fusion, the fusion is applied directly to the pixel values of the images. Similarly, feature-level fusion is applied to features extracted from the source images, and decision-level fusion is applied to higher-level descriptions of the input images (Jiang and Wang 2014). This paper focuses on pixel-level image fusion algorithms, which have the advantage that the original image information can be fused in both the spatial and transform domains (Sahu and Parsai 2012; Rani and Sharma 2013; Yang and Li 2012).
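The remark that pixel-level fusion can also operate in the transform domain can be illustrated with a common wavelet-based rule: average the approximation coefficients and, in each detail subband, keep the coefficient with the larger magnitude. The sketch below is our illustration (not the paper's algorithm); it assumes the PyWavelets package and registered single-channel inputs:

```python
import numpy as np
import pywt  # PyWavelets

def fuse_wavelet(a: np.ndarray, b: np.ndarray,
                 wavelet: str = "db2", level: int = 2) -> np.ndarray:
    """Transform-domain pixel-level fusion with a wavelet max-abs rule.

    Approximation coefficients are averaged; in each detail subband the
    coefficient with the larger magnitude (stronger local activity) wins.
    Assumes a and b are registered single-channel images of equal shape.
    """
    ca = pywt.wavedec2(a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(b.astype(np.float64), wavelet, level=level)

    fused = [(ca[0] + cb[0]) / 2.0]           # average approximations
    for da, db in zip(ca[1:], cb[1:]):        # per-level detail tuples
        fused.append(tuple(
            np.where(np.abs(sa) >= np.abs(sb), sa, sb)
            for sa, sb in zip(da, db)
        ))
    return pywt.waverec2(fused, wavelet)
```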
Infrared and visible images fusion by using sparse representation and guided filter
Published in Journal of Intelligent Transportation Systems, 2020
Qilei Li, Wei Wu, Lu Lu, Zuoyong Li, Awais Ahmad, Gwanggil Jeon
Depending on the processing domain, current infrared and visible image fusion methods can be divided into two categories (Zhou & Omar, 2009): spatial domain methods and transform domain methods. Spatial domain methods fuse the infrared and visible pair directly via fusion rules. The classical approach is to average the infrared image and the visible image; regrettably, the fused images obtained this way are often unsatisfactory. To address this problem, Li, Kang, and Hu (2013) decomposed the infrared and visible image pairs into base layers and detail layers and then fused them using a guided filter (GF). However, the activity level measurement in Li et al. (2013) is not accurate enough. Miao and Wang (2005) gauged the activity level according to the image gradients. Zhou, Wang, Li, and Dong (2016) combined a Gaussian filter with a bilateral filter to fuse hybrid multi-scale decomposed images, although the decomposition process is time-consuming. Recently, some deep learning-based image fusion methods have been proposed. Liu, Chen, Peng, and Wang (2017) pioneered the use of convolutional neural networks for image fusion. To improve performance, the Laplacian pyramid was used for multi-scale decomposition, and image self-similarity was used to optimize the network model (Liu, Chen, Cheng, Peng, & Wang, 2018). Following a different direction, Ma, Yu, Liang, Li, and Jiang (2018) proposed the "FusionGAN" model for infrared and visible image fusion based on a generative adversarial network (GAN). Although these spatial domain methods can achieve good fusion results, they can also introduce negative effects such as overly smooth transitions at edges, reduced contrast, and spectral distortion of the image.
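A simplified two-scale spatial-domain fusion in the spirit of the base/detail decomposition described above can be sketched as follows. This is our illustration, with a Gaussian low-pass filter standing in for the guided filter of Li et al. (2013), and it assumes registered single-channel inputs:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_two_scale(ir: np.ndarray, vis: np.ndarray,
                   sigma: float = 5.0) -> np.ndarray:
    """Simplified two-scale fusion of registered infrared/visible images.

    Low-pass base layers are averaged; high-pass detail layers are fused
    with a max-abs rule. (A Gaussian filter stands in here for the guided
    filter used by Li et al. 2013.)
    """
    ir = ir.astype(np.float64)
    vis = vis.astype(np.float64)

    # Two-scale decomposition: base = low-pass, detail = residual.
    base_ir, base_vis = gaussian_filter(ir, sigma), gaussian_filter(vis, sigma)
    det_ir, det_vis = ir - base_ir, vis - base_vis

    fused_base = (base_ir + base_vis) / 2.0
    fused_detail = np.where(np.abs(det_ir) >= np.abs(det_vis),
                            det_ir, det_vis)
    return fused_base + fused_detail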
A novel image fusion algorithm using an NSCT and a PCNN with digital filtering
Published in International Journal of Image and Data Fusion, 2018
Guoan Yang, Chihiro Ikuta, Songjun Zhang, Yoko Uwate, Yoshifumi Nishio, Zhengzhi Lu
For many kinds of images, such as visible images, multifocus images, multisensor images, synthetic aperture radar (SAR) images and infrared images, image fusion can make their respective advantages mutually complementary. The image fusion algorithm is therefore an important part of both image processing and computer vision research (Zheng et al. 2007, Yang and Li 2010, Wang et al. 2013a). In recent years, several new image fusion algorithms have been reported and widely applied in image processing, computer vision, pattern recognition and other fields (Qu et al. 2008, Li and Yang 2010, Jang et al. 2012).