Embedding Parallelism In Image Processing Techniques And Its Applications
Published in Sanjay Saxena, Sudip Paul, High-Performance Medical Image Processing, 2022
Suchismita Das, G. K. Nayak, Sanjay Saxena
High-performance computing (HPC), or parallel computing, has become an integral part of today's mainstream computing systems, and the GPU is pivotal for handling time-consuming image processing techniques and algorithms. A GPU is a processor on a graphics card built for highly parallel computation; its main role is to transform, render, and accelerate graphics. Unlike a CPU, it devotes most of its millions of transistors to floating-point arithmetic. It has driven the 3-D graphics revolution and makes real-time HD graphics possible. While the CPU is a serial processor, the GPU is a stream processor: it uses simple control logic to execute many threads concurrently while sharing execution resources across those threads. Figure 11.3 (taken from a lecture note on GPU architecture and CUDA programming) shows how CUDA performs a 1D convolution, producing each output element on a different core in parallel, which in turn reduces the computation time substantially.
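The figure's idea can be sketched in plain Python (a minimal illustration, not the book's code): each output element of a 1D convolution depends only on its own small input window, so a GPU can assign one thread or core per output element and compute them all at once.

```python
import numpy as np

def conv1d_output(signal, kernel, i):
    # Work done by one "core": output i is a sliding dot product over a
    # small window, independent of every other output element.
    k = len(kernel)
    window = signal[i:i + k]
    return float(np.dot(window, kernel))

def conv1d(signal, kernel):
    # Serial driver loop; on a GPU each iteration would map to its own
    # CUDA thread. (No kernel flip here, i.e. cross-correlation, as is
    # common in GPU convolution examples.)
    n = len(signal) - len(kernel) + 1
    return [conv1d_output(signal, kernel, i) for i in range(n)]

print(conv1d([1, 2, 3, 4], [1, 0, -1]))  # [-2.0, -2.0]
```

Because the per-output computations share no state, launching one thread per output is exactly the parallel decomposition the figure depicts.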
Using computer software packages to assist engineering activities
Published in David Salmon, Penny Powdrill, Mechanical Engineering Level 2 NVQ, 2012
Your computer is connected to the monitor by a video card. A video card is a circuit board whose job is to translate information from the computer into the words and pictures on the screen. Video cards may also be called graphics cards.
A networked smart home system based on recurrent neural networks and reinforcement learning
Published in Systems Science & Control Engineering, 2021
In the training, the initial indoor temperature is 0, the initial humidity is 80, and P is 30. In the reward, the square root function is applied to the distance from the state to the ideal state. The numbers of temperature, humidity, and dust observers are each set to 1; the light intensity is set to a constant value; and the wind speed is measured using equipment, as shown in Figure 4. In this environment, the intelligent networked equipment includes an air conditioner, a humidifier/dehumidifier, and an air cleaner. The simulation runs on a computer with an Intel Core i7-10700, 64 GB of memory, and an NVIDIA 2080 Ti graphics card. The software environment is Python/PyTorch.
Desktop versus immersive virtual environments: effects on spatial learning
Published in Spatial Cognition & Computation, 2020
Jiayan Zhao, Tesalee Sensibaugh, Bobby Bodenheimer, Timothy P. McNamara, Alina Nazareth, Nora Newcombe, Meredith Minear, Alexander Klippel
The VE was not changed from Experiment 1 except that the buildings were named differently for the desktop continuous travel condition (see footnote 11). For the Vive teleportation condition, the VE and the travel approach were the same as in Experiment 1, while participants were seated in a swivel chair that fixed their physical location to the center of the tracking area. Specifically, Vive teleportation participants were allowed to turn their heads and bodies to look around, but their physical walking was constrained by the chair. The HTC Vive used in the Vive teleportation condition was identical to Experiment 1. The travel approach for the desktop continuous travel condition was that of the standard Virtual Silcton paradigm. Specifically, desktop continuous travel participants pressed the arrow keys to perform translation movements in four directions (i.e., forward, backward, left, and right) to mimic continuous travel with optic flow and moved the mouse to look around in both horizontal and vertical directions. The translating velocity was constant at 5 meters/second. The desktop VE was displayed on a 60 cm monitor (1920 × 1080 resolution) with a 90° geometric field of view. The physical field of view was approximately 53°. In both conditions the VE was rendered by a Dell computer equipped with an Intel HD 530 graphics card.
Structural displacement monitoring using deep learning-based full field optical flow methods
Published in Structure and Infrastructure Engineering, 2020
Chuan-Zhi Dong, Ozan Celik, F. Necati Catbas, Eugene J. O’Brien, Su Taylor
When comparing the image processing time, it takes 1.7 seconds to calculate the full field optical flow of two images with a resolution of 1280 × 960 pixels using FlowNet2. The computation is accelerated by a Graphics Processing Unit (GPU) on a Linux system (Ubuntu 18.04) with an AMD Ryzen 5 2600X CPU, 16 GB RAM, and an NVIDIA GeForce GTX 1080 graphics card. The same operation on the same system takes about 1600 seconds using Classic + NL. The Classic + NL used in this study is the same as that of Khaloo and Lattanzi and does not implement GPU acceleration. During this experiment, 1159 images were collected, and it took about 32.8 minutes using FlowNet2 to calculate the full field optical flow of the image sequence, whereas it took about 21.5 days for Classic + NL.
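The reported totals follow directly from the per-pair timings; a quick arithmetic check using only the numbers quoted above:

```python
n_pairs = 1159       # images in the sequence, per the study
flownet2_s = 1.7     # seconds per full-field flow estimate (FlowNet2, GPU)
classicnl_s = 1600   # seconds per estimate (Classic + NL, no GPU)

minutes_flownet2 = n_pairs * flownet2_s / 60       # total FlowNet2 time in minutes
days_classicnl = n_pairs * classicnl_s / 86400     # total Classic + NL time in days
print(round(minutes_flownet2, 1), round(days_classicnl, 1))  # 32.8 21.5
```

The outputs match the paper's reported ~32.8 minutes and ~21.5 days, a speedup of roughly three orders of magnitude.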