Super-resolution Optical Microscopy with Structured Illumination
Published in Guy Cox, Fundamentals of Fluorescence Imaging, 2019
Liisa M. Hirvonen, Trevor A. Smith
Photo-toxicity, particularly in biological samples, can impose severe limitations on the application of any optical microscopy technique. SIM, being a widefield (non-point-scanned) method, affords some advantages in this respect over certain other super-resolution techniques. However, the excitation light flux should always be minimized as far as possible, often to levels at which the low signal-to-noise ratio of the images becomes problematic. Modern electron multiplying charge-coupled device (EMCCD) and scientific complementary metal–oxide–semiconductor (sCMOS) cameras used in SIM apparatus exhibit remarkably low noise levels, but noise is still a major issue. Some success at overcoming noise issues has been demonstrated by applying denoising algorithms [46–48] prior to the image reconstruction process. The effect of the ND-SAFIR denoising algorithm is illustrated in Fig. 15.9 for a SIM image of polystyrene microspheres. The ability to resolve two closely lying beads (middle of the yellow profile line) is enhanced after denoising (Fig. 15.9b), as shown by the intensity profile along this line (Fig. 15.9c).
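ND-SAFIR is a patch-based denoiser; as a rough, hypothetical illustration of the denoise-then-reconstruct workflow described above, the sketch below applies scikit-image's non-local means (a related patch-based method, not ND-SAFIR itself) to a raw SIM frame. The function name and parameter values are illustrative assumptions, not taken from the chapter.

```python
# Sketch: patch-based denoising of a raw SIM frame before reconstruction.
# Non-local means stands in here for ND-SAFIR, which is not in scikit-image.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def denoise_sim_frame(raw_frame: np.ndarray) -> np.ndarray:
    """Denoise a single raw SIM frame (2D float array scaled to [0, 1])."""
    sigma = float(np.mean(estimate_sigma(raw_frame)))  # estimate noise level
    return denoise_nl_means(
        raw_frame,
        h=0.8 * sigma,        # filtering strength tied to the noise estimate
        sigma=sigma,
        patch_size=5,         # small patches suit fine SIM fringe structure
        patch_distance=6,
        fast_mode=True,
    )

# Each raw structured-illumination frame would be denoised this way
# before being passed to the SIM reconstruction algorithm.
```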
Clinical Mammographic and Tomosynthesis Units
Published in Paolo Russo, Handbook of X-ray Imaging, 2017
As reported by Padole et al. (2015), conventional FBP is associated with relatively high image noise and artifacts at reduced doses, because it relies on simplifying mathematical assumptions that ignore key information such as the statistics of the X-ray photons and the system model. Iterative techniques repeat the image reconstruction a given number of times, progressively refining the estimate with this information to generate images with lower noise. Because of the limited angular range used in tomosynthesis compared to the full angular coverage of CT, image reconstruction in DBT is affected by several types of artifacts. A typical artifact is that produced by high-contrast objects in the breast, such as metallic clips or large calcifications: the high-contrast object appears, shifted, in several planes adjacent to the in-focus plane. This artifact can be corrected by several algorithms (Wu et al. 2006). Another common artifact is the truncation artifact, due to the limited size of the image detector, which produces bright horizontal lines and a saturated area close to the detector edge. The truncation artifact can be minimized by reconstructing a volume slightly larger than that corresponding to the compressed breast thickness and by applying different types of algorithms (Sechopoulos 2013).
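As a minimal sketch of the iterate-and-correct principle described above, assuming a linearized system model with an explicit matrix A (clinical DBT reconstructions use matrix-free projectors and richer statistical models), a Landweber/SIRT-style update looks like this:

```python
# Sketch of iterative reconstruction: repeatedly back-project the
# residual between measured and modelled projections. A maps the
# volume x to the projection data b; both are assumed given.
import numpy as np

def iterative_reconstruction(A, b, n_iters=20, step=None):
    """Landweber iteration: x <- x + step * A^T (b - A x)."""
    if step is None:
        # convergence requires step < 2 / ||A||^2 (spectral norm squared)
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        residual = b - A @ x          # mismatch between data and model
        x += step * (A.T @ residual)  # back-project the residual
        np.clip(x, 0.0, None, out=x)  # enforce non-negative attenuation
    return x
```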
Optimization and Dose Reduction in Nuclear Medicine
Published in Lawrence T. Dauer, Bae P. Chu, Pat B. Zanzonico, Dose, Benefit, and Risk in Medical Imaging, 2018
As the field strives to reduce the radiation burden to patients, there is active research into developing reconstruction algorithms that can produce better images from fewer detected counts. Much of the clinical dose reduction innovation in tomographic nuclear imaging in the last decade has been enabled through advancements in data processing and image generation. For example, image reconstruction has progressed from filtered back projection techniques to the now common iterative ones. Data processing and image generation remain an active area of development and can support a “do more with less” strategy to improve optimization.
Deep learning for photoacoustic tomography from sparse data
Published in Inverse Problems in Science and Engineering, 2019
Stephan Antholzer, Markus Haltmeier, Johannes Schwab
Applying standard algorithms to sparse data yields low-quality images containing severe undersampling artefacts. To some extent, these artefacts can be reduced by using iterative image reconstruction algorithms [30–36], which make it possible to include prior knowledge such as smoothness, sparsity or total variation (TV) constraints [37–42]. These algorithms tend to be time consuming, as the forward and adjoint problems have to be solved repeatedly. Further, iterative algorithms have their own limitations: the reconstruction quality strongly depends on the a priori model used for the objects to be recovered. For example, TV minimization assumes sparsity of the gradient of the image to be reconstructed. Such assumptions are often not strictly satisfied in real-world scenarios, which again limits the theoretically achievable reconstruction quality.
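A hedged sketch of one common way to impose a TV constraint is proximal gradient descent, with a TV-denoising step acting as the proximal operator. Here A is an assumed explicit forward matrix and all parameter values are illustrative:

```python
# Sketch of TV-regularized iterative reconstruction via proximal gradient:
# alternate a data-fidelity gradient step with a TV proximal (denoising) step.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def tv_reconstruction(A, b, shape, n_iters=50, step=1e-2, tv_weight=0.1):
    x = np.zeros(shape)
    for _ in range(n_iters):
        grad = (A.T @ (A @ x.ravel() - b)).reshape(shape)  # data-fidelity gradient
        x = x - step * grad                                 # gradient step
        x = denoise_tv_chambolle(x, weight=step * tv_weight)  # TV proximal step
    return x
```

Note that both A and its adjoint A.T are applied in every iteration, which is precisely why such schemes become time consuming for large-scale problems.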
Hybrid spectrum conjugate gradient algorithm in electromagnetic tomography
Published in Instrumentation Science & Technology, 2023
Liu Li, Yue Luo, Qian Zhao, Zhanjun Wang
Electromagnetic tomography comprises a forward problem and an inverse problem. The forward problem is the process of obtaining and analyzing the measurement values when the spatial distribution of the object field is known, and involves establishing the physical model, extracting the measurement values, and constructing the sensitivity matrix. Image reconstruction is the main content of the inverse problem. At present, the main image reconstruction algorithms include the LBP algorithm, the Tikhonov regularization algorithm, iterative algorithms, the conjugate gradient (CG) algorithm, and neural network methods [6–8].
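As a rough illustration of two of these algorithms, assume a linearized EMT model m ≈ S g, where S is the sensitivity matrix, m the boundary measurements, and g the image vector (the variable names are ours, not from the paper):

```python
# Sketch of LBP and Tikhonov reconstruction for a linearized model m = S g.
import numpy as np

def lbp_reconstruction(S, m):
    """Linear back-projection: a single transpose operation, no inversion."""
    return S.T @ m

def tikhonov_reconstruction(S, m, alpha=1e-3):
    """Solve min_g ||S g - m||^2 + alpha ||g||^2 in closed form."""
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + alpha * np.eye(n), S.T @ m)
```

LBP is fast but blurry; the Tikhonov term alpha trades resolution against stability of the ill-posed inversion.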
Attenuation correction of polychromatic L-shell X-ray fluorescence computed tomography imaging
Published in Journal of Nuclear Science and Technology, 2019
Long Liu, Xiaolin Zhou, Huanlong Liu, Ning Ding
Many reconstruction methods have been presented to reconstruct XFCT images. These image reconstruction methods include analytical methods and iterative methods: filtered back projection is the typical analytical method, while MLEM is the typical iterative method [19]. Because the weight values change with the platinum concentration, the MLEM algorithm was chosen as the polychromatic L-shell XFCT reconstruction method. The MLEM algorithm can be expressed as (8) [20].
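The excerpt ends before equation (8) itself; for orientation, the standard MLEM update takes the form below, where $y_i$ are the measured counts, $a_{ij}$ the system (weight) matrix entries, and $\lambda_j^{(k)}$ the image estimate at iteration $k$. The paper's equation (8) may differ in notation and in how the polychromatic L-shell weights enter.

```latex
% Standard MLEM update (generic form, not copied from the paper)
\lambda_j^{(k+1)}
  = \frac{\lambda_j^{(k)}}{\sum_i a_{ij}}
    \sum_i a_{ij}\,
    \frac{y_i}{\sum_{j'} a_{ij'}\,\lambda_{j'}^{(k)}}
```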