Interactive Graphics Pipeline
Published in Aditi Majumder, M. Gopi, Introduction to Visual Computing, 2018
Rasterization is the last step of the interactive graphics pipeline, in which all the pixels inside the clipped polygons (triangles may not remain triangles after clipping) have to be computed, with colors and other attributes interpolated from those of the polygon's vertices. During the clipping operation, the attributes at the edge-window intersection points are themselves computed by interpolating the attributes at the vertices of the given triangle. Rasterization is performed in the graphics hardware; we provide only very basic methods and some key insights into how such methods are made efficient. The buffer in which we draw the color is called the framebuffer, and the buffer in which we handle the depth is called the z-buffer or depth buffer. Both buffers are the size of the window defined by the API. We start with a clear framebuffer (all pixels initialized to black) and the depth buffer set to 0. Since we deal with the reciprocal of depth in the z-buffer, initializing it to 0 means the depth is at ∞.
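To make the convention concrete, the following is a minimal sketch in Python/NumPy (not the hardware implementation, and not the book's code): one screen-space triangle is rasterized with edge functions, the reciprocal of depth is interpolated and tested against a z-buffer cleared to 0, and per-vertex colors are interpolated across the covered pixels. Names such as raster_triangle and the edge-function formulation are illustrative assumptions.

```python
import numpy as np

W, H = 640, 480
framebuffer = np.zeros((H, W, 3), dtype=np.float32)  # cleared to black
zbuffer = np.zeros((H, W), dtype=np.float32)         # stores 1/z; 0 means depth = infinity

def edge(a, b, p):
    """Signed area of triangle (a, b, p); sign tells which side of edge a->b p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def raster_triangle(v0, v1, v2, c0, c1, c2):
    """v* = (x, y, z) in screen space, c* = RGB color at each vertex."""
    area = edge(v0, v1, v2)
    if area == 0:
        return  # degenerate triangle
    xmin = max(int(min(v0[0], v1[0], v2[0])), 0)
    xmax = min(int(max(v0[0], v1[0], v2[0])) + 1, W)
    ymin = max(int(min(v0[1], v1[1], v2[1])), 0)
    ymax = min(int(max(v0[1], v1[1], v2[1])) + 1, H)
    for y in range(ymin, ymax):
        for x in range(xmin, xmax):
            p = (x + 0.5, y + 0.5)  # sample at the pixel center
            w0, w1, w2 = edge(v1, v2, p), edge(v2, v0, p), edge(v0, v1, p)
            # inside if all edge functions agree in sign (handles both windings)
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                w0, w1, w2 = w0 / area, w1 / area, w2 / area  # barycentric weights
                inv_z = w0 / v0[2] + w1 / v1[2] + w2 / v2[2]  # interpolate 1/z
                if inv_z > zbuffer[y, x]:                     # larger 1/z means closer
                    zbuffer[y, x] = inv_z
                    framebuffer[y, x] = w0 * c0 + w1 * c1 + w2 * c2

# e.g., one triangle with red, green, and blue vertices:
raster_triangle((120, 60, 2.0), (520, 140, 5.0), (300, 420, 3.0),
                np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([0, 0, 1]))
```

Initializing the z-buffer to 0 and accepting a fragment when its interpolated 1/z is larger matches the reciprocal-depth convention described above: an empty pixel behaves as if its surface were infinitely far away.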
Geometry operations
Published in Robin Lovelace, Jakub Nowosad, Jannes Muenchow, Geocomputation with R, 2019
Robin Lovelace, Jakub Nowosad, Jannes Muenchow
Rasterization is the conversion of vector objects into their raster representation. Usually, the output raster is used for quantitative analysis (e.g., analysis of terrain) or modeling. As we saw in Chapter 2, the raster data model has some characteristics that make it conducive to certain methods. Furthermore, the process of rasterization can help simplify datasets because the resulting values all have the same spatial resolution: rasterization can be seen as a special type of geographic data aggregation.
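The book's own examples use R, but the aggregation view of rasterization can be sketched in a few lines of Python/NumPy: each point is assigned to the grid cell containing it, and each cell value counts the points that fall inside it, so the output has a single fixed spatial resolution. The helper rasterize_points and its arguments are hypothetical, not from the book.

```python
import numpy as np

def rasterize_points(xs, ys, bounds, shape):
    """Aggregate point coordinates into a count raster.

    bounds = (xmin, ymin, xmax, ymax); shape = (nrow, ncol).
    Each cell value is the number of points falling inside it,
    i.e., rasterization acting as spatial aggregation.
    """
    xmin, ymin, xmax, ymax = bounds
    nrow, ncol = shape
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    cols = ((xs - xmin) / (xmax - xmin) * ncol).astype(int).clip(0, ncol - 1)
    rows = ((ymax - ys) / (ymax - ymin) * nrow).astype(int).clip(0, nrow - 1)  # row 0 = top
    raster = np.zeros(shape, dtype=int)
    np.add.at(raster, (rows, cols), 1)  # unbuffered add handles points in the same cell
    return raster

# e.g., ten random points aggregated onto a 4x4 grid over the unit square:
rng = np.random.default_rng(0)
grid = rasterize_points(rng.random(10), rng.random(10), (0, 0, 1, 1), (4, 4))
```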
HDR Pipeline
Published in Francesco Banterle, Alessandro Artusi, Kurt Debattista, Alan Chalmers, Advanced High Dynamic Range Imaging, 2017
Francesco Banterle, Alessandro Artusi, Kurt Debattista, Alan Chalmers
Rasterization. Rasterization [13] uses a different approach from ray tracing for rendering. The main concept is the projection of each primitive in the scene onto the screen (frame buffer) and, subsequently, its discretization into fragments, which are then rasterized into the final image. When a primitive is projected and discretized, visibility has to be resolved to obtain a correct visualization and to avoid incorrect overlap between objects. For this task, the Z-buffer [85] is generally used. The Z-buffer is an image, typically of the same size as the frame buffer, that stores the depth values of previously resolved fragments. For each fragment at a position x, its depth value, F(x)_z, is tested against the one stored in the Z-buffer, Z(x)_z. If F(x)_z < Z(x)_z, the new fragment is written into the frame buffer, and F(x)_z is placed in the Z-buffer. After the depth test, lighting is evaluated for all fragments. However, shadows, reflections, refractions, and inter-reflections cannot be handled natively by this process, since no rays are shot. These effects are often emulated by rendering the scene from different positions. For example, shadows can be emulated by computing a Z-buffer from the position of the light source and applying a depth test during shading to determine whether a point is in shadow. This method is known as shadow mapping [420]. The main advantage of rasterization is that it is supported by current graphics hardware, which allows high performance in terms of drawn primitives. Such performance is achieved because rasterization is straightforward to parallelize: fragments are coherent and independent, and the data structures are easy to update. Nevertheless, the emulation of physically based light transport effects (i.e., shadows, reflections/refractions, etc.) is not as accurate as ray tracing and is biased in many cases.
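The depth test and the shadow-mapping emulation described above can be sketched as follows. This is a Python/NumPy illustration under the assumption that the Z-buffer stores plain depth initialized to far (matching the F(x)_z < Z(x)_z convention of this excerpt); depth_test and in_shadow are illustrative names, and the light-space coordinates are assumed to be precomputed.

```python
import numpy as np

W, H = 640, 480
frame = np.zeros((H, W, 3), dtype=np.float32)
zbuf = np.full((H, W), np.inf, dtype=np.float32)        # camera-view depths; far = inf
shadow_map = np.full((H, W), np.inf, dtype=np.float32)  # Z-buffer rendered from the light

def depth_test(x, y, frag_z, color):
    """Write a fragment only if it is closer than what the Z-buffer holds."""
    if frag_z < zbuf[y, x]:          # F(x)_z < Z(x)_z
        zbuf[y, x] = frag_z          # update the stored depth
        frame[y, x] = color          # write the fragment to the frame buffer

def in_shadow(light_u, light_v, light_z, bias=1e-3):
    """Shadow-map lookup during shading: the point is shadowed if something
    closer to the light was recorded when rendering from the light's position.
    The small bias avoids self-shadowing from depth quantization."""
    return light_z > shadow_map[light_v, light_u] + bias
```

During shading, each visible point is transformed into the light's view and compared against the stored light-space depth; a mismatch means an occluder sits between the point and the light, which is exactly the emulation of shadows without shooting rays.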
The key performance indicators of projection-based light field visualization
Published in Journal of Information Display, 2019
Peter A. Kara, Roopak R. Tamboli, Oleksii Doronin, Aron Cserkaszky, Attila Barsi, Zsolt Nagy, Maria G. Martini, Aniko Simon
The visualized light field content can be a converted image set or, if a 3D mesh is available, rasterized or ray-traced content. Light field images and videos captured by a real or virtual camera array typically fall into the first case. Static scenes and videos can also be rendered by ray tracing for virtual content generation. Interactive content, such as games and applications requiring user input, is normally represented by 3D meshes. These meshes are either rasterized or ray-traced before visualization on the screen of the light field display. Rasterization is computationally less expensive and faster; as such, it is the most common visualization method for such content. Ray tracing is also possible, as demonstrated by the work of Doronin et al. [29].