Preliminaries
Published in Wong Gabriyel, Wang Jianliang, Real-Time Rendering: Computer Graphics with Control Engineering, 2017
Shaders come in two main types: vertex and pixel shaders. A vertex shader is a graphics processing function used to add special effects to objects in a 3D environment. It is executed once for each vertex sent to the graphics processor, and its purpose is to transform each vertex’s 3D position in virtual space into the 2D coordinate at which it appears on the screen, together with a depth value used by the graphics hardware. A pixel shader is a computation kernel that computes the colour and other attributes of each pixel. Pixel shader functions range from always outputting the same colour, to applying a lighting value, to adding visual effects such as bump mapping, shadows, specular highlights, and translucency. Pixel shaders can also alter pixel depth and output more than one colour if multiple render targets are active. Figure 2.4 illustrates an example of the effects of pixel shaders on a 3D object. Apart from vertex and pixel shaders, an important feature of state-of-the-art graphics rendering architectures is the geometry shader. Geometry shaders are added to the rendering pipeline to enable the generation of graphics primitives, such as points, lines, and different types of triangles, after the execution of vertex shaders. With this capability, it is possible to perform operations such as mesh resolution manipulation and procedural geometry generation.
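As a concrete illustration of these two roles, the following minimal GLSL sketch pairs a vertex shader that projects each vertex to its screen position with a fragment (pixel) shader that outputs a constant colour, the simplest case mentioned above. The uniform name is an assumption for the sketch, not taken from the book.

```glsl
// Minimal vertex shader: transforms each vertex's 3D position into clip
// space; the hardware derives the 2D screen coordinate and the depth value.
#version 330 core
layout(location = 0) in vec3 position;  // per-vertex 3D position
uniform mat4 modelViewProjection;       // combined transform, supplied by the application (assumed name)

void main() {
    gl_Position = modelViewProjection * vec4(position, 1.0);
}
```

```glsl
// Minimal fragment (pixel) shader: computes a colour for each pixel.
// Here it always outputs the same colour, the simplest case mentioned above.
#version 330 core
out vec4 fragColour;

void main() {
    fragColour = vec4(1.0, 0.5, 0.2, 1.0); // constant orange
}
```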
Hiding Media Data via Shaders: Enabling Private Sharing in the Clouds
Published in Kaikai Liu, Xiaolin Li, Mobile SmartLife via Sensing, Localization, and Cloud Ecosystems, 2017
Fig. 11.3 shows our designed secure media sharing process over open social media channels. We leverage an image key in addition to the normal key for better security. To meet the design objectives of ease of use and low computational complexity, we integrate our proposed privacy-preserving techniques into one customized image filter. The image filter works in the raw image domain and does not require image format compliance. A highly integrated block simplifies integration into existing code. To improve the efficiency of the pixel-wise computation in our approach, we design and implement this customized image filter on the GPU via an OpenGL shader. A shader is a program designed to run on a stage of the graphics processor and is written in the OpenGL Shading Language. We utilize fragment shaders in the OpenGL rendering pipeline (after the rasterizer) for the pixel manipulation required by our proposed algorithm. The area covered by a fragment corresponds to the pixel area, so the computationally intensive pixel-by-pixel operation can be converted to fragment processing with a highly parallel implementation on the GPU. We use normalization and block-based processing in the algorithm design specifically to fit the GPU shader processing framework for high computational efficiency.
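The chapter does not reproduce the filter's code, but a hypothetical GLSL fragment-shader sketch of block-based, normalized pixel manipulation might look as follows. The key texture, block size, and the wrap-around transform are illustrative assumptions, not the authors' actual algorithm.

```glsl
// Hypothetical sketch of a block-based privacy filter as a fragment shader.
// The key texture, block size, and scrambling transform are assumptions.
#version 330 core
uniform sampler2D inputImage;   // raw image to protect
uniform sampler2D keyTexture;   // per-block key material derived from the image key
uniform vec2 imageSize;         // image dimensions in pixels
uniform float blockSize;        // side length of a processing block, e.g. 8.0

in vec2 texCoord;               // normalized coordinate of this fragment (from the vertex shader)
out vec4 fragColour;

void main() {
    // Identify the block this fragment falls in; one key sample per block.
    vec2 pixel = texCoord * imageSize;
    vec2 block = floor(pixel / blockSize);
    vec3 key = texture(keyTexture, block * blockSize / imageSize).rgb;

    // Normalized pixel values are shifted by the key and wrapped back into
    // [0, 1); each fragment is processed independently, so the whole image
    // is filtered in parallel on the GPU.
    vec3 colour = texture(inputImage, texCoord).rgb;
    fragColour = vec4(fract(colour + key), 1.0);
}
```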
Close to reality surrounding model for virtual testing of autonomous driving and ADAS
Published in Johannes Edelmann, Manfred Plöchl, Peter E. Pfeffer, Advanced Vehicle Control AVEC’16, 2017
F. Chucholowski, C. Gnandt, C. Hepperle, Sebastian Hafner
Visible images originate from the propagation of light and its spectral reflection on surfaces. The various kinds of reflection defined by material parameters are calculated by shaders. Shaders are programs that describe the characteristics of either a vertex or a pixel to achieve a desired appearance. These rendering effects are calculated on the GPU, providing high performance and a high degree of flexibility (Fernando & Kilgard 2003). It is therefore possible to model several physical effects that are disregarded in most computer games in favor of appealing graphics. According to the required model accuracy, objects and surfaces are defined by varying levels of detail in meshes, surface mapping, or pre-rendered textures. To resolve conflicting performance and accuracy demands, these methods are switched automatically during simulation based on view distance and other parameters (Unity Technologies 2016).
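As an illustration of how such material-dependent reflection is evaluated per pixel, the following GLSL sketch computes the diffuse and specular terms of a Blinn-Phong model. The uniform and varying names are assumptions, and the paper's shaders may use different reflection models.

```glsl
// Sketch: per-pixel reflection from material parameters
// (Blinn-Phong diffuse and specular terms; names are illustrative).
#version 330 core
uniform vec3 lightDir;         // direction towards the light, normalized
uniform vec3 lightColour;      // spectral intensity of the light
uniform vec3 materialDiffuse;  // material reflectance parameters
uniform vec3 materialSpec;
uniform float shininess;

in vec3 normal;                // interpolated surface normal from the vertex shader
in vec3 viewDir;               // direction towards the camera
out vec4 fragColour;

void main() {
    vec3 n = normalize(normal);
    vec3 v = normalize(viewDir);
    vec3 h = normalize(lightDir + v);                  // half vector
    float diff = max(dot(n, lightDir), 0.0);           // diffuse reflection
    float spec = pow(max(dot(n, h), 0.0), shininess);  // specular highlight
    fragColour = vec4(lightColour * (materialDiffuse * diff + materialSpec * spec), 1.0);
}
```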
Speeding Up Monte Carlo Computations by Parallel Processing Using a GPU for Uncertainty Evaluation in accordance with GUM Supplement 2
Published in NCSLI Measure, 2018
C. M. Tsui, Aaron Y. K. Yan, H. W. Lai
GPUs are designed to render 3D computer graphics in real time. They have a highly parallel architecture optimized for manipulating pixels in frame buffers. In the early days, GPUs had limited programmability and were seldom used outside the graphics processing area. The situation started to change in 2007, when the architectures of GPUs from the two major suppliers, Nvidia and AMD, became more flexible: the hardware could be re-configured to support different shader stages in the graphics rendering pipeline (the unified shader model). (A “shader” is computer jargon for a short program that handles graphics.) People began to realize that, with this flexibility, it was feasible to deploy the highly parallel architecture of GPUs for general scientific computing applications.
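For illustration only, a GLSL compute shader shows how parallel Monte Carlo sampling of this kind maps onto the GPU: each invocation draws one sample independently. The hash-based random generator and the buffer layout are assumptions for this sketch and do not represent the article's implementation.

```glsl
// Illustrative compute shader: each invocation draws one Monte Carlo sample
// in parallel (here, a hit test for estimating pi). The hash-based random
// generator and buffer layout are assumptions, not the article's method.
#version 430
layout(local_size_x = 256) in;

layout(std430, binding = 0) buffer Results { float hit[]; };

uniform uint seed;

// Simple integer hash mapped to [0, 1); adequate for illustration only.
float rand(uint x) {
    x ^= x >> 16; x *= 0x7feb352du;
    x ^= x >> 15; x *= 0x846ca68bu;
    x ^= x >> 16;
    return float(x) / 4294967296.0;
}

void main() {
    uint i = gl_GlobalInvocationID.x;
    float u = rand(i * 2u + seed);
    float v = rand(i * 2u + 1u + seed);
    // One sample per GPU thread: does the point fall inside the unit circle?
    hit[i] = (u * u + v * v <= 1.0) ? 1.0 : 0.0;
}
```

The host program would then average the contents of hit[] (and multiply by four in this pi-estimation example); in an uncertainty evaluation, the hit test would be replaced by evaluations of the measurement model.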
SSVEP-based brain–computer interface for music using a low-density EEG system
Published in Assistive Technology, 2022
Satvik Venkatesh, Eduardo Reck Miranda, Edward Braund
To utilize hardware-accelerated rendering and vertical synchronization (VSync), the visual stimulus was implemented with the Open Graphics Library (OpenGL). Vertex shader and fragment shader programs were written in the OpenGL Shading Language (GLSL). The vertex shader specifies the coordinates of the flashing squares, and the fragment shader varies the luminance of the region. The luminance is varied according to Equation (1), using sinusoidal stimulation (Manyakov et al., 2013).
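A minimal GLSL fragment shader for such a stimulus might modulate luminance as (1/2)(1 + sin(2*pi*f*t)), the usual form of sinusoidal stimulation; the uniform names, and whether this matches the paper's Equation (1) exactly, are assumptions.

```glsl
// Sketch of the fragment-shader side of the flashing stimulus: luminance is
// modulated sinusoidally over time. Uniform names are illustrative.
#version 330 core
uniform float time;       // seconds since stimulus onset
uniform float frequency;  // flicker frequency of this square, in Hz
out vec4 fragColour;

void main() {
    // Luminance varies as (1/2) * (1 + sin(2*pi*f*t)), between 0 and 1.
    float luminance = 0.5 * (1.0 + sin(6.2831853 * frequency * time));
    fragColour = vec4(vec3(luminance), 1.0);
}
```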