Explore chapters and articles related to this topic
Increase of Frame Rate in Virtual Environments by Using 2D Digital Image Warping
Published in Krzysztof Gałkowski, Jeff David Wood, Multidimensional Signals, Circuits and Systems, 2001
2. Geometric Transformation: Here, per-vertex operations on the geometric primitives (surfaces, polygons, lines, points) are done. Geometric transformation includes modeling transformations of vertices and normals from world coordinates to eye coordinates, lighting calculations on a per-vertex basis, clipping of all objects outside the viewing frustum, and the projection of the scene onto two-dimensional screen coordinates. Of these computations, the lighting calculations are the most complex (Clay 1996). Every additional global light source requires a re-computation, and distance attenuation and local lights add significant effort.
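To make the per-vertex workload concrete, the following Python sketch (illustrative only; the function names, matrices, and light parameters are assumptions, not taken from the chapter) transforms one vertex and its normal into eye coordinates and evaluates a single point light with distance attenuation, the kind of computation that is repeated for every vertex and every light source.

```python
import numpy as np

def transform_vertex(vertex, normal, modelview):
    """Transform a vertex and its normal into eye coordinates."""
    v_eye = modelview @ np.append(vertex, 1.0)
    # Normals are transformed with the inverse-transpose of the upper-left 3x3.
    n_eye = np.linalg.inv(modelview[:3, :3]).T @ normal
    return v_eye[:3], n_eye / np.linalg.norm(n_eye)

def per_vertex_lighting(v_eye, n_eye, light_pos, light_color,
                        k_c=1.0, k_l=0.05, k_q=0.01):
    """Diffuse contribution of one point light with distance attenuation."""
    to_light = light_pos - v_eye
    d = np.linalg.norm(to_light)
    attenuation = 1.0 / (k_c + k_l * d + k_q * d * d)  # extra cost of local lights
    diffuse = max(np.dot(n_eye, to_light / d), 0.0)
    return attenuation * diffuse * light_color

# One vertex, two lights: the lighting loop repeats per light and per vertex.
modelview = np.eye(4)
v, n = transform_vertex(np.array([0.0, 0.0, -2.0]), np.array([0.0, 0.0, 1.0]), modelview)
color = sum(per_vertex_lighting(v, n, np.array(p), np.array([1.0, 1.0, 1.0]))
            for p in [(1.0, 1.0, 0.0), (-1.0, 2.0, 1.0)])
```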
Tracking and Calibration
Published in Terry M. Peters, Cristian A. Linte, Ziv Yaniv, Jacqueline Williams, Mixed and Augmented Reality in Medicine, 2018
Optical tracking systems are either videometric or infrared (IR)-based. In a videometric tracking system, the poses of marker patterns are determined from video image sequences captured by one or more cameras. With only a monoscopic camera, the pose of a planar marker can be determined through homography. A dictionary of unique markers can be built, such as those freely available in ArUco [3], allowing robust tracking of multiple patterns simultaneously. The size of these planar patterns must be optimized with respect to the size of the viewing frustum to ensure visibility and tracking accuracy.
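As a rough illustration of videometric tracking with a single camera, the sketch below uses OpenCV's ArUco module to detect markers from a predefined dictionary and recovers each planar marker's pose with a PnP solve on its four corners. The marker size, dictionary choice, and camera intrinsics are placeholder assumptions, and the exact aruco API names vary across OpenCV versions.

```python
import numpy as np
import cv2

MARKER_SIZE = 0.05  # marker edge length in metres (placeholder value)
# Placeholder intrinsics; in practice these come from camera calibration.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

# 3D corners of a planar marker in its own frame (z = 0 plane),
# ordered top-left, top-right, bottom-right, bottom-left.
half = MARKER_SIZE / 2.0
object_corners = np.array([[-half,  half, 0.0],
                           [ half,  half, 0.0],
                           [ half, -half, 0.0],
                           [-half, -half, 0.0]])

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def track_markers(frame):
    """Detect ArUco markers and estimate the pose of each planar pattern."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    poses = {}
    if ids is not None:
        for marker_id, image_corners in zip(ids.flatten(), corners):
            # A planar target: solvePnP recovers rotation and translation.
            ok, rvec, tvec = cv2.solvePnP(object_corners,
                                          image_corners.reshape(4, 2),
                                          camera_matrix, dist_coeffs)
            if ok:
                poses[int(marker_id)] = (rvec, tvec)
    return poses
```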
Smart Eye Tracking Sensors Based on Pixel-Level Image Processing Circuits
Published in Iniewski Krzysztof, Integrated Microsystems, 2017
In the monocular case, a typical 2D image-viewing application is expected, and the coordinates of the eye tracker are mapped to the 2D (orthogonal) viewport coordinates accordingly (the viewport coordinates are expected to match the dimensions of the image being displayed in the HMD). In the binocular (VR) case, the eye tracker coordinates are mapped to the dimensions of the near viewing plane of the viewing frustum. The following sections discuss the mapping and the calibration of the eye tracker coordinates to the display coordinates for monocular applications.
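A minimal sketch of the two mappings described above, assuming the eye tracker reports normalized gaze coordinates in [0, 1]; the viewport size, near-plane distance, and field-of-view values are illustrative placeholders rather than parameters from the chapter.

```python
import math

def gaze_to_viewport(gx, gy, viewport_w, viewport_h):
    """Monocular case: map normalized gaze (0..1) to 2D viewport pixel coordinates."""
    return gx * viewport_w, gy * viewport_h

def gaze_to_near_plane(gx, gy, near, fov_y_deg, aspect):
    """Binocular (VR) case: map normalized gaze to a point on the near plane
    of the viewing frustum, in eye-space units."""
    half_h = near * math.tan(math.radians(fov_y_deg) / 2.0)
    half_w = half_h * aspect
    x = (2.0 * gx - 1.0) * half_w   # -half_w .. +half_w
    y = (2.0 * gy - 1.0) * half_h   # -half_h .. +half_h
    return x, y, -near              # near plane lies at z = -near in eye space

# Example: 640x480 viewport, and a 90-degree frustum with a 0.1 near plane.
print(gaze_to_viewport(0.5, 0.5, 640, 480))          # -> (320.0, 240.0)
print(gaze_to_near_plane(0.5, 0.5, 0.1, 90.0, 4/3))  # -> (0.0, 0.0, -0.1)
```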
Analytical model of multiview autostereoscopic 3D display with a barrier or a lenticular plate
Published in Journal of Information Display, 2018
Vladimir Saveljev, Irina Palchikova
The image region is the volume of locations from which an observer at base b can potentially see the image. It is a functional analogue of the viewing frustum in 3D computer graphics. The image region encloses the array of light sources: the screen together with the adjacent space in front of and behind it.