Augmenting Haptic Perception in Surgical Tools
Published in Terry M. Peters, Cristian A. Linte, Ziv Yaniv, Jacqueline Williams, Mixed and Augmented Reality in Medicine, 2018
Randy Lee, Roberta L. Klatzky, George D. Stetten
Rendering in computer graphics refers to the process by which images (visual stimuli) are produced from two-dimensional scenes or three-dimensional (3D) models. In the context of haptics, rendering refers to the process by which tactile stimuli—for example, forces or vibrations—are presented to a user to convey information about a virtual object (Salisbury et al. 2004). High-fidelity haptic rendering therefore requires a method to sense user interaction in terms of position, velocity, or force, in multiple degrees of freedom (DoF), and subsequently to use that interaction to generate feedback forces against the user. Most commercially available haptic renderers (e.g., Geomagic Touch, Novint Falcon) are restricted to actuating in only the three translational DoF, using motors and pulleys that act on a stylus or ball. The Magnetically Levitated Haptic Device (MLHD) from Butterfly Haptics can actuate forces in all six DoF, albeit over a smaller reachable volume (Berkelman and Hollis 1997, 2000).
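As an illustration of this sense-and-actuate loop, the sketch below renders a one-dimensional virtual wall with a simple spring-damper model. The device functions, stiffness, and damping values are hypothetical placeholders for illustration, not the interface of any particular device discussed in the chapter.

```python
# Minimal sketch of a 1-DoF haptic rendering loop for a virtual wall.
# read_position()/apply_force() are hypothetical stand-ins for a device SDK.

WALL_POSITION = 0.0   # wall surface along one axis (m)
STIFFNESS = 800.0     # virtual spring constant (N/m), illustrative value
DAMPING = 2.0         # damping coefficient (N*s/m), illustrative value

def render_wall_force(position, velocity):
    """Return the feedback force for a stiff virtual wall."""
    penetration = WALL_POSITION - position
    if penetration <= 0.0:
        return 0.0                      # no contact, no force
    # Spring-damper model: push the stylus back out of the wall.
    return STIFFNESS * penetration - DAMPING * velocity

print(render_wall_force(-0.002, 0.01))  # 2 mm inside the wall -> ~1.58 N outward

# Typical haptic loops run at ~1 kHz so contact feels rigid:
# while device_is_running():
#     x, v = read_position(), read_velocity()
#     apply_force(render_wall_force(x, v))
```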
Force-System Resultants and Equilibrium
Published in Richard C. Dorf, The Engineering Handbook, 2018
Rendering techniques generally involve setting a color or color function across each component, often based on the effects of a light source directed from the observer toward the scene. For example, a square polygon facing the observer may be red, while its color changes toward darker shades of red as the polygon is rotated, becoming completely dark when it is 90° from the observer. Shading techniques may compute color as a continuous function across a component. Gouraud shading, for example, interpolates a color value across a polygon from its corner values, while another technique known as Phong shading interpolates the surface normal vector itself across the polygon and evaluates the lighting calculation at each point. More advanced rendering techniques include ray tracing, which computes the behavior of light rays to simulate effects such as reflectance, shadows, and translucency, and texture mapping, which simulates a pattern or image across surfaces of the displayed model.
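To make the distinction concrete, the sketch below (not taken from the handbook) contrasts the two interpolation orders for a single point inside a triangle, using made-up vertex normals, barycentric coordinates, and a simple Lambertian light.

```python
import numpy as np

# Gouraud vs. Phong shading at one point inside a triangle (illustrative data).
light_dir = np.array([0.0, 0.0, 1.0])                 # light toward the observer
vertex_normals = np.array([[0.0, 0.0, 1.0],
                           [0.0, 0.7071, 0.7071],
                           [0.7071, 0.0, 0.7071]])
base_color = np.array([1.0, 0.0, 0.0])                # red polygon
bary = np.array([0.3, 0.3, 0.4])                      # barycentric coords of the point

def lambert(normal):
    """Diffuse (Lambertian) intensity for a given surface normal."""
    return max(np.dot(normal / np.linalg.norm(normal), light_dir), 0.0)

# Gouraud: light each vertex first, then interpolate the resulting colors.
vertex_colors = np.array([lambert(n) * base_color for n in vertex_normals])
gouraud_color = bary @ vertex_colors

# Phong: interpolate the normal first, then light the interpolated normal.
interp_normal = bary @ vertex_normals
phong_color = lambert(interp_normal) * base_color

print("Gouraud:", gouraud_color, "Phong:", phong_color)
```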
The Diverse Domain
Published in Aditi Majumder, M. Gopi, Introduction to Visual Computing, 2018
Rendering is the process of taking as input a 3D scene and a view setup, and creating the 2D image of the 3D scene as it will be seen from that particular viewpoint. The two main aspects of rendering are the quality of the appearance of the generated 2D image and the time it takes to generate it.
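A minimal sketch of this idea is projecting a single 3D point onto the 2D image for a given viewpoint; the pinhole-camera setup below (focal length, image resolution, point coordinates) is illustrative, not a complete renderer.

```python
# Core projection step of rendering: mapping a 3D point to a pixel position.

FOCAL_LENGTH = 1.0          # distance from the viewpoint to the image plane
WIDTH, HEIGHT = 640, 480    # output image resolution in pixels

def project(x, y, z):
    """Project a 3D point in camera coordinates (z > 0) to pixel coordinates."""
    # Perspective divide: points farther away land closer to the image centre.
    u = FOCAL_LENGTH * x / z
    v = FOCAL_LENGTH * y / z
    # Map normalized image-plane coordinates (assumed in [-1, 1]) to pixels.
    col = int((u + 1.0) * 0.5 * WIDTH)
    row = int((1.0 - (v + 1.0) * 0.5) * HEIGHT)
    return col, row

print(project(0.5, 0.2, 2.0))   # a point two units in front of the camera
```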
Adopting GPU computing to support DL-based Earth science applications
Published in International Journal of Digital Earth, 2023
Zifu Wang, Yun Li, Kevin Wang, Jacob Cain, Mary Salami, Daniel Q. Duffy, Michael M. Little, Chaowei Yang
While DL has the potential to bring significant benefits to Earth science, AI/DL applications are often challenging for computing devices: AI/DL algorithms are computationally intensive to train and run, the data sets used in Earth science can be very large and complex, and the available resources are usually limited. Thus, specialized hardware such as graphics processing units (GPUs) has been leveraged to run AI/DL applications efficiently. GPUs are specialized processors designed to handle the complex calculations required for graphics rendering. They have been widely used in distributed DL model training in recent years since they are well suited to the parallel processing needs of neural networks. With GPU support for matrix multiplications and other mathematical operations, the training of neural networks can be accelerated significantly compared to using a central processing unit (CPU). However, there are a few challenges to consider when using a GPU to accelerate DL. For example, transferring data between the CPU and GPUs can be slow and incur extra costs (Zhang and Xu 2023), which can limit the performance improvements gained from using GPUs. Thus, a scientific investigation of the performance improvement from using GPUs to accelerate DL applications in different scenarios would help build a better understanding of how GPUs are utilized in DL applications.
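As a rough illustration of both points, the sketch below (assuming the PyTorch library is available) times the same matrix multiplication on the CPU and on a GPU, and separately times the host-to-device transfer; the matrix sizes are arbitrary and the measured ratios will vary with hardware.

```python
import time
import torch

# Compare CPU vs. GPU matrix multiplication and show the data-transfer cost.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

t0 = time.time()
c_cpu = a @ b                                   # matmul on the CPU
cpu_time = time.time() - t0

if torch.cuda.is_available():
    t0 = time.time()
    a_gpu, b_gpu = a.to("cuda"), b.to("cuda")   # host-to-device transfer (extra cost)
    transfer_time = time.time() - t0

    t0 = time.time()
    c_gpu = a_gpu @ b_gpu                       # matmul on the GPU
    torch.cuda.synchronize()                    # wait for the asynchronous kernel
    gpu_time = time.time() - t0

    print(f"CPU: {cpu_time:.3f}s  GPU: {gpu_time:.3f}s  transfer: {transfer_time:.3f}s")
else:
    print(f"CPU only: {cpu_time:.3f}s")
```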
Virtual forests: a review on emerging questions in the use and application of 3D data in forestry
Published in International Journal of Forest Engineering, 2023
Arnadi Murtiyoso, Stefan Holm, Henri Riihimäki, Anna Krucher, Holger Griess, Verena Christiane Griess, Janine Schweier
The visualization of 3D data is naturally conducted using digital screens, which are currently the most common conduit for digital data (Klippel et al. 2019). The earliest example of 3D data rendering using computers and a 2D screen display is the Sketchpad software (Sutherland 1964), which could be used to draw simple 3D primitives. In modern 3D rendering, the graphics processing unit (GPU) plays an important role. By using a dedicated GPU instead of a computer’s central processing unit (CPU), the amount of rendered data can be increased significantly and rendering can be completed much faster (Palha et al. 2017). Two modern graphics APIs used for rendering are OpenGL (Johansson et al. 2015) and DirectX (Baek and Yoo 2020). Operations within the 3D space also require interaction with the GPU, with the two most common interfaces being NVIDIA’s CUDA and the open standard OpenCL (Ghorpade et al. 2012).
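As a hedged illustration of offloading a 3D-space operation to the GPU, the sketch below rotates a synthetic point cloud with NumPy on the CPU and, if the CUDA-backed CuPy library is available, repeats the operation on the GPU; the point cloud and rotation are made up, and CuPy is used here only as one convenient NumPy-compatible interface to CUDA, not as a tool named in the review.

```python
import numpy as np

def rotate_points(xp, points, angle_rad):
    """Rotate an (N, 3) point cloud about the z-axis using array module xp."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = xp.asarray([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
    return points @ rot.T

points_cpu = np.random.rand(1_000_000, 3)        # e.g., a synthetic lidar point cloud
result_cpu = rotate_points(np, points_cpu, 0.5)  # CPU (NumPy)

try:
    import cupy as cp                            # optional CUDA-backed library
    points_gpu = cp.asarray(points_cpu)          # copy to GPU memory
    result_gpu = rotate_points(cp, points_gpu, 0.5)
    result_back = cp.asnumpy(result_gpu)         # copy back for CPU-side use
except ImportError:
    pass                                         # fall back to the CPU result
```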
Creation of 3D printed fashion prototype with multi-coloured texture: a practice-based approach
Published in International Journal of Fashion Design, Technology and Education, 2021
Ivonbony Chan, Joe Au, Chupo Ho, Jin Lam
Three-dimensional printing is an additive manufacturing technique used to create a solid object. During the manufacturing process, materials are successively deposited one cross-sectional layer at a time, with the final product comprising many layers (Campbell et al., 2011). The process starts with a 3D digital model generated on a computer using various types of 3D drawing software, referred to as computer-aided design (CAD) software. A software program slices the model into layers and converts them into files readable by a 3D printer; the printer then adds material layer by layer to form the 3D object. Three-dimensional virtual objects are created through a combination of 3D modelling, texturing and rendering in CAD design. In the 3D modelling process, polygons are assembled into polygon meshes to form shapes. Texture mapping is a method of adding illusive textures to 3D virtual objects, where the texture can be a photographic image or a hand painting. During the rendering process, the meshes are unwrapped onto one flat image, and the texture is then projected onto the meshes (Ahearn, 2014; Mullen, 2011). Rendering is a mathematical process that calculates every pixel of the virtual object to produce a final image (Murdock, 2012).
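The sketch below (not from the cited texts) illustrates the texture-lookup step behind this idea: a tiny made-up checkerboard stands in for the unwrapped flat image, and a nearest-neighbour lookup returns the color for one surface point's (u, v) coordinate.

```python
import numpy as np

# Illustrative texture lookup: each mesh vertex carries a (u, v) coordinate
# into the unwrapped image, and the renderer samples a color there per pixel.
texture = np.zeros((4, 4, 3))
texture[::2, 1::2] = texture[1::2, ::2] = [1.0, 1.0, 1.0]   # tiny checkerboard

def sample_texture(u, v):
    """Nearest-neighbour lookup at normalized UV coordinates in [0, 1]."""
    h, w, _ = texture.shape
    row = min(int(v * h), h - 1)
    col = min(int(u * w), w - 1)
    return texture[row, col]

# Color fetched for one surface point whose unwrapped position is (0.3, 0.8):
print(sample_texture(0.3, 0.8))
```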