Telescopes for Inner Space: Fiber Optics and Endoscopes
Published in Suzanne Amador Kane, Boris A. Gelman, Introduction to Physics in Modern Medicine, 2020
Figure 2.9b shows the net effect of these ray-tracing rules on rays of light emanating from the same point on the object, a distance d_o from the lens. These rays are focused to a single point on the other side of the lens at a distance d_i. (Note that while we've only considered three rays here, all rays emitted from the same point on the original object that pass through the lens are focused at the same new point as these three particular rays.) In other words, rays of light coming from each point on the original object are focused by the lens at a new location. Figure 2.9b shows that this new location is different for different points on the original object; in fact, by applying these rules to rays from every point on the object, we could prove that the light is focused in such a way that the shape of the original object is always preserved. This set of new locations is the image. We call it an image because rays of light exit from it just as they would from an actual object present at that location, and a real image because you could actually project a picture of the object on a screen placed at that location.
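The focusing behavior described above is summarized by the thin-lens equation, 1/d_o + 1/d_i = 1/f, which relates the object and image distances to the focal length f. A minimal numerical sketch (the function name and the sample distances are illustrative, not from the text):

```python
def image_distance(d_o, f):
    """Thin-lens equation: 1/d_o + 1/d_i = 1/f.
    Returns the image distance d_i for an object a distance d_o from a
    thin lens of focal length f (all in the same length units)."""
    if d_o == f:
        # Rays from the focal point exit the lens parallel: no image forms.
        raise ValueError("object at the focal point: no image is formed")
    return 1.0 / (1.0 / f - 1.0 / d_o)

# An object 30 cm from a converging lens with f = 10 cm is imaged
# 15 cm beyond the lens, where a screen would show a real image:
d_i = image_distance(30.0, 10.0)   # 15.0 cm
```

Moving the object closer to the focal point pushes the image farther away, consistent with the ray diagrams in Figure 2.9b.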
Optics and optical instruments
Published in Andrew Norton, Dynamic Fields and Waves, 2019
(a) A real image is one to which the rays converge. In the real-is-positive convention, a real image’s distance from the lens (or mirror) is always assigned a positive value. A virtual image is one from which rays appear to diverge; its distance from the lens (or mirror) is always assigned a negative value.
(b) A real object is one from which the rays diverge (or appear to diverge); its distance from the lens (or mirror) is assigned a positive value. A virtual object is one to which the rays would have converged had they not been intercepted by the lens (or mirror); its distance from the lens (or mirror) is assigned a negative value.
(c) Converging lenses and mirrors are assigned positive focal lengths. Diverging lenses and mirrors are assigned negative focal lengths.
Plane Mirrors
Published in Abdul Al-Azzawi, Photonics, 2017
Consider a point source of light placed a distance S in front of a plane mirror at a location O, as shown in Figure 8.2. The distance S is called the object distance. Light rays leaving the source strike the mirror and are reflected. After reflection the rays diverge, but to the viewer they appear to come from a point I located “behind” the mirror at a distance S’. Point I is called the image of the object at O, and the distance S’ is called the image distance. Images are formed in the same way for all optical components, including mirrors and lenses: an image forms either at the point where the rays of light actually intersect or at the point from which they appear to originate. Solid lines represent light coming from an object or a real image; dashed lines represent light that only appears to come from a virtual image. Figure 8.2 shows that the rays appear to originate at I, which is behind the mirror. This is the location of the image. Images are classified as real or virtual. A real image is one in which light, after reflecting from a mirror or passing through a lens, actually converges, so the image can be captured on a screen. A virtual image is one through which the light does not really pass; it appears either behind the mirror or on the object’s side of the lens, and it cannot be displayed on a screen.
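For a plane mirror this reduces to a one-line relation: the image lies as far behind the mirror as the object is in front of it, upright and the same size. A minimal sketch, using the real-is-positive convention in which a negative image distance marks a virtual image (the function name is illustrative):

```python
def plane_mirror_image(s):
    """Plane-mirror imaging: for an object a distance s in front of
    the mirror, the virtual image sits a distance s behind it.
    Returns (image distance, magnification) in the real-is-positive
    convention, so the image distance is negative (virtual image)."""
    s_image = -s          # negative: behind the mirror, cannot be projected
    magnification = 1.0   # upright and the same size as the object
    return s_image, magnification

print(plane_mirror_image(0.5))   # (-0.5, 1.0)
```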
Online droplet monitoring in inkjet 3D printing using catadioptric stereo system
Published in IISE Transactions, 2019
Tianjiao Wang, Chi Zhou, Wenyao Xu
The major components of the proposed catadioptric stereo system are a CCD camera, a mirror, and a magnification lens system. The system structure is shown briefly in Figure 1. The mirror can be adjusted to obtain the best field of view and is kept fixed during the calibration and detection process. The angle of the mirror to the camera axis is set at 45° to achieve consistent resolution in the X and Y axes. In a traditional multi-camera reconstruction system, the synchronization of the camera shutters must be carefully considered: because the droplet flies at high speed, cameras that image the same droplet at different time stamps introduce non-negligible operational errors due to the undesired time delay. In the proposed catadioptric system, the camera captures the real image and the virtual image (reflected by the mirror) at the same instant, which eliminates the synchronization issue and is an important advantage for a precise and efficient monitoring system. In addition, the tiny mirror used in this system has a small footprint; compared with a multi-camera design, it significantly improves accessibility, makes the system easier to deploy, and reduces its cost.
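The geometric idea behind the catadioptric design is that the mirror view is equivalent to a second, "virtual" camera at the mirror-reflected pose, so one physical camera yields two simultaneous viewpoints. This can be sketched as a point reflection across the mirror plane; the mirror placement below is illustrative and not taken from the paper:

```python
import math

def reflect_point(p, plane_point, unit_normal):
    """Reflect a 3D point across a mirror plane given by a point on the
    plane and its unit normal: r = p - 2 ((p - p0) . n) n.
    Reflecting the camera center this way gives the center of the
    equivalent virtual camera in a catadioptric stereo setup."""
    d = sum((pi - qi) * ni for pi, qi, ni in zip(p, plane_point, unit_normal))
    return tuple(pi - 2.0 * d * ni for pi, ni in zip(p, unit_normal))

# Mirror tilted 45 degrees to the camera (z) axis, passing through the
# origin, with its normal in the x-z plane (illustrative geometry):
n = (math.sin(math.pi / 4), 0.0, -math.cos(math.pi / 4))
camera_center = (0.0, 0.0, -1.0)
virtual_camera = reflect_point(camera_center, (0.0, 0.0, 0.0), n)
# The virtual camera lands at roughly (-1, 0, 0): its viewing direction
# is rotated 90 degrees, which is what gives the two orthogonal views.
```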
Stereoscopic augmented reality for single camera endoscopy: a virtual study
Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2018
Yen-Yu Wang, Atul Kumar, Kai-Che Liu, Shih-Wei Huang, Ching-Chun Huang, Wei-Chia Su, Fu-Li Hsiao, Wen-Nung Lie
The real images were displayed as the background texture of the RW (Figure 3). The 3D virtual model was loaded in the same RW (Figure 3). The RW has a virtual camera and light source whose position and orientation determine the view of the virtual image. To create a well-aligned composite image from the real image and the virtual image, the RW camera parameters (extrinsic and intrinsic) need to be matched with those of the endoscope camera. The following steps were used to match the parameters of the RW camera to the endoscope camera:
1. The focal length of the endoscope camera was calculated using the camera calibration method described by Zhang et al. (2012). The focal length of the RW camera was set to this value.
2. The endoscope was positioned so that a solid and relatively fixed organ, such as the kidney, was visible in the real image. The 3D virtual model in the RW was moved with the mouse cursor and positioned so that the contours of the organs in the real image and the virtual image overlapped completely (Figure 4). This achieved the initial registration of the real image and the virtual image and produced the first composite image. For this view of the virtual model, the RW’s camera pose (VCamPose1), the real image (F1), and the virtual image (RenImage1) were stored in the computer memory.
3. The 3D point coordinates (Model3D1) in the RW corresponding to the pixel coordinates in RenImage1 were identified by a ray tracing method (Nikodym 2010). For each pixel coordinate, the ray tracing method creates a ‘ray’ vector in the render window; the nearest point at which this ray intersects the 3D object in the render window is taken as the 3D point corresponding to that pixel coordinate. Since F1 and RenImage1 were registered, the 3D points in the solid organ area also corresponded to the pixel coordinates of F1. The pixel coordinates of F1 and the corresponding 3D point coordinates in Model3D1 were stored in the computer memory.
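The per-pixel ‘ray’ in step 3 can be sketched as a pinhole back-projection: each pixel maps to a direction through the camera center, which is then intersected with the 3D model to find the nearest surface point. The intrinsic parameters and function name below are illustrative assumptions, not the paper's actual implementation:

```python
import math

def pixel_to_ray(u, v, fx, fy, cx, cy):
    """Back-project pixel (u, v) into a unit direction vector in the
    camera frame, using a pinhole model with focal lengths (fx, fy)
    in pixels and principal point (cx, cy). Intersecting this ray
    with the rendered 3D model (e.g. nearest-hit ray casting) yields
    the 3D point corresponding to the pixel."""
    d = ((u - cx) / fx, (v - cy) / fy, 1.0)
    norm = math.sqrt(sum(c * c for c in d))
    return tuple(c / norm for c in d)

# The ray through the principal point is the optical axis itself:
print(pixel_to_ray(320.0, 240.0, 800.0, 800.0, 320.0, 240.0))  # (0.0, 0.0, 1.0)
```

Transforming this camera-frame ray by the stored camera pose (VCamPose1) expresses it in the RW's world coordinates, where the intersection with the model is computed.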