Advances in verification and delivery techniques
Jing Cai, Joe Y. Chang, Fang-Fang Yin in Principles and Practice of Image-Guided Radiation Therapy of Lung Cancer, 2017
where E(·) denotes the symmetric energy function, A is the projection matrix with coefficients a_ij, u denotes the forward DVF, and v denotes the inverse DVF. The inverse-consistency constraint is shown in the last line of Equation 15.3. The energy function contains a data-fidelity term and a smoothness-constraint term; λ is a parameter that controls the trade-off between the two terms, and the smoothness term is a free-form energy of the deformation fields [40,41].
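The structure of such an inverse-consistent energy can be illustrated with a toy 1-D sketch. This is not the book's Equation 15.3: the projection operator A is omitted for brevity, the function and symbol names (`warp_1d`, `energy`, `lam`, `mu`) are my own, and the weights are illustrative.

```python
import numpy as np

def warp_1d(f, dvf):
    # Warp a 1-D signal f by a displacement field via linear interpolation.
    x = np.arange(f.size, dtype=float)
    return np.interp(x + dvf, x, f)

def energy(u, v, moving, target, lam, mu):
    # Data-fidelity term: the warped moving image should match the target.
    data = np.sum((warp_1d(moving, u) - target) ** 2)
    # Free-form smoothness term: penalize gradients of both DVFs.
    smooth = np.sum(np.diff(u) ** 2) + np.sum(np.diff(v) ** 2)
    # Inverse-consistency term: composing forward and inverse DVFs
    # should approximate the identity mapping.
    inv = np.sum((u + warp_1d(v, u)) ** 2)
    return data + lam * smooth + mu * inv
```

With zero displacements and identical images all three terms vanish, which is the sanity check one would expect of a registration energy.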
Brain–Computer Interfaces for Mediating Interaction in Virtual and Augmented Reality
Chang S. Nam, Anton Nijholt, Fabien Lotte in Brain–Computer Interfaces Handbook, 2018
At a minimum, most VR/AR setups manipulate the input to the user’s sense of vision. The least immersive technique is “desktop-based VR,” where three-dimensional (3D) objects are rendered on a 2D computer monitor (e.g., Scherer et al. 2012). This technique has the advantage of being the least expensive and simplest to implement. Some 2D monitors support the use of shutter glasses or other techniques to create a 3D illusion by presenting an adjusted image to each of the left and right eyes (Marathe et al. 2008). 3D projection walls typically create a 3D illusion in a similar way but offer a larger field of view (Slobounov et al. 2015). Head-mounted displays (HMDs) generally provide a higher level of immersion. They are fixed to the user’s head and present separately rendered images for each eye. HMDs track the movement of the user’s head so that the images adapt to the head’s orientation (e.g., Faller et al. 2016; see also Figure 12.1a). At a similarly high level of immersion, cave automatic virtual environments (CAVEs; Cruz-Neira et al. 1992; see also Figure 12.1b) position the user in a box onto whose walls images are projected. As with HMDs, the images are dynamically rendered and take head orientation into account.
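The per-eye rendering described above can be sketched as follows: offset each eye from the tracked head position by half the interpupillary distance along the head's right axis, then build one view matrix per eye. This is a minimal illustration, not any particular engine's API; the function names (`look_at`, `stereo_views`) and the default IPD of 64 mm are my own assumptions.

```python
import numpy as np

def look_at(eye, target, up):
    # Standard right-handed view matrix construction.
    f = target - eye; f = f / np.linalg.norm(f)
    s = np.cross(f, up); s = s / np.linalg.norm(s)
    u = np.cross(s, f)
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye
    return m

def stereo_views(head_pos, forward, up, ipd=0.064):
    # Offset each eye by half the interpupillary distance along the
    # head's right axis, then build one view matrix per eye.
    right = np.cross(forward, up); right = right / np.linalg.norm(right)
    offsets = (-ipd / 2 * right, ipd / 2 * right)
    return [look_at(head_pos + o, head_pos + o + forward, up) for o in offsets]
```

In an HMD pipeline, `head_pos` and `forward` would come from the headset's tracker every frame, so both view matrices follow the user's head motion.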
2D plastic scintillation dosimetry for photons
Sam Beddar, Luc Beaulieu in Scintillation Dosimetry, 2018
The square root term in Equation 13.6 is a geometric factor that is null when the dose pixel is outside the volume of the scintillating fiber. Further details about the projection matrix can be found elsewhere [27]. The scintillation efficiency and the optical attenuation of each fiber were determined by a calibration procedure similar to the one used with the fluence detector. Equation 13.4 was then solved using an iterative reconstruction algorithm.
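The kind of iterative reconstruction mentioned here, solving a linear system relating scintillator signals to dose through a projection matrix, can be sketched with a generic Landweber iteration. This is not the algorithm of reference [27], only a standard illustrative scheme; the non-negativity clip reflects the assumption that dose values cannot be negative.

```python
import numpy as np

def landweber(A, d, n_iter=200, step=None):
    # Iteratively solve A s = d in the least-squares sense:
    #   s_{k+1} = s_k + step * A^T (d - A s_k)
    if step is None:
        # Convergence requires step < 2 / sigma_max(A)^2.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        s = s + step * A.T @ (d - A @ s)
        s = np.clip(s, 0, None)  # physical doses are non-negative
    return s
```

For a well-conditioned system the iteration recovers the true solution; in practice the number of iterations trades off noise amplification against resolution.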
A fusion learning method to subgroup analysis of Alzheimer's disease
Published in Journal of Applied Statistics, 2023
Mingming Liu, Jing Yang, Yushi Liu, Bochao Jia, Yun-Fei Chen, Luna Sun, Shujie Ma
Another problem is how to choose the working covariance matrix. Following [26], this modification uses the jth diagonal element of the projection matrix for subject i. Given (4), we estimate ρ by taking the average of the estimated correlations between adjacent time points, where we consider only those adjacent time points whose scaled distance equals 1, i.e.
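The averaging step can be sketched as follows: given residuals observed on a common time grid, compute the sample correlation of each adjacent pair of columns and average. This is only an illustration of the idea, not the authors' estimator; the function name `adjacent_rho` is my own, and the sketch assumes all adjacent pairs have scaled distance 1.

```python
import numpy as np

def adjacent_rho(resid):
    # resid: n_subjects x T matrix of residuals on a common time grid.
    # Average the sample correlation between each pair of adjacent
    # time points (assumed to be at scaled distance 1).
    T = resid.shape[1]
    rhos = [np.corrcoef(resid[:, t], resid[:, t + 1])[0, 1]
            for t in range(T - 1)]
    return float(np.mean(rhos))
```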
A shape-partitioned statistical shape model for highly deformed femurs using X-ray images
Published in Computer Assisted Surgery, 2022
Jongho Chien, Ho-Gun Ha, Seongpung Lee, Jaesung Hong
To match a 3D femur surface model with 2D C-arm images, a feature matching method was adopted. The first step is the estimation of the pose parameters, which reflect the changes in translation, rotation, and scale between the 2D images and the 3D model. The second step is the projection of the 3D vertices of the femur model onto the 2D C-arm image plane. Using the intrinsic parameters obtained from the C-arm calibration procedure, a virtual X-ray source is assumed, and the vertices of the 3D model are projected onto 2D image points using a projection matrix whose elements comprise the intrinsic and extrinsic parameters. The X-ray imaging system is modeled as a regular pinhole camera [23]. The intrinsic parameters are determined by the focal length of the camera, the width and height of a pixel on the projection plane, and the principal point, whereas the extrinsic parameters are determined by the rotation and translation between the camera coordinates and the world coordinates [23,24]. The third step is to extract the outer vertices from the projected vertices, as shown in Figure 2(b). Procrustes analysis of these boundary points and the boundary points obtained from the C-arm images is used to estimate the pose parameters between the 2D images and the 3D models.
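The pinhole projection step described above follows the standard decomposition P = K [R | t], where K holds the intrinsic parameters and R, t the extrinsic ones. The sketch below is the textbook formulation, not the paper's implementation; the function names and the example calibration values are my own.

```python
import numpy as np

def projection_matrix(K, R, t):
    # P = K [R | t]: intrinsics (3x3) times extrinsics (3x4), pinhole model.
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(P, pts3d):
    # Project Nx3 world points to Nx2 pixel coordinates via
    # homogeneous coordinates and perspective division.
    pts_h = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    uvw = (P @ pts_h.T).T
    return uvw[:, :2] / uvw[:, 2:3]
```

With identity extrinsics, a point on the optical axis lands at the principal point, which is a quick way to sanity-check a calibration.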
Novel diagnostic and imaging techniques in endovascular iliac artery procedures
Published in Expert Review of Cardiovascular Therapy, 2020
Sanne W. De Boer, Stefan G.H. Heinen, Seline R. Goudeketting, Michiel W. De Haan, Barend M. Mees, Daniel A. F. Van Den Heuvel, Jean-Paul P. M. De Vries
A possible major advantage of this novel technique is that it provides real-time 3D feedback on the position of the guidewire or catheter. The endovascular specialist will be able to navigate a guidewire or catheter without the use of fluoroscopy, in combination with 3D information, resulting in a reduction of radiation exposure and iodinated CM use. The 3D projection capabilities also exceed those of conventional 2D fluoroscopy in complex and elongated anatomy, because the endovascular specialist can view the tracked device from different angles simultaneously. Current drawbacks are that only two devices can be tracked at the same time, and only FORS-enabled devices can be tracked using the proprietary hardware and software.