Aerial Imagery Registration Using Deep Learning for UAV Geolocalization
Published in Mahmoud Hassaballah, Ali Ismail Awad, Deep Learning in Computer Vision, 2020
To carry out the experiments, several libraries and configurations were tried before settling on the current setup. Ubuntu, a free Debian-based Linux distribution supported on many laptops, servers, and embedded development kits such as the NVIDIA Jetson TX1/TX2, was chosen as the operating system. Because this work involves deep learning, GPU performance was harnessed using CUDA 8 and cuDNN v5. The code was written mostly in C++ so that it could be cross-compiled for multiple platforms and execute quickly [36]; the Python programming language was also used to integrate with deep-learning libraries through their Python wrappers. The network was trained on the IRISA-UBS Lab Cluster, which has the following specifications: an Intel Xeon E5-2687W CPU (3.10 GHz × 20 cores) and an NVIDIA GeForce GTX Titan X.
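As a minimal illustration of pinning a toolchain like the one above, the sketch below checks reported toolkit versions against the setup's minimums (CUDA 8, cuDNN v5). The version tuples and helper function are hypothetical stand-ins for illustration, not part of the authors' code.

```python
# Minimum toolkit versions from the described setup, as (major, minor) tuples.
REQUIRED = {"cuda": (8, 0), "cudnn": (5, 0)}

def meets_requirement(name, installed):
    """Return True if an installed (major, minor) version satisfies the minimum."""
    return installed >= REQUIRED[name]

print(meets_requirement("cuda", (8, 0)))   # prints True: matches CUDA 8
print(meets_requirement("cudnn", (4, 1)))  # prints False: older than cuDNN v5
```

Tuple comparison handles the major/minor ordering directly, so no version-parsing library is needed for a check this small.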
Lidar & Camera Based Assistance System for the Visually Challenged
Published in Durgesh Kumar Mishra, Nilanjan Dey, Bharat Singh Deora, Amit Joshi, ICT for Competitive Strategies, 2020
The NVIDIA Jetson Nano Developer Kit is a small, low-power computer that can run many neural networks in parallel for image classification, object detection, segmentation, and speech processing. It requires only 5 W of power to carry out all of these tasks.
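A minimal sketch of the "many networks in parallel" pattern described above, using Python threads to dispatch one frame to two concurrent inference tasks. The `classify` and `detect` functions here are hypothetical stubs standing in for real networks, not any framework's API.

```python
# Run two (stub) neural-network tasks concurrently on the same camera frame,
# as an edge device like the Jetson Nano might do.
from concurrent.futures import ThreadPoolExecutor

def classify(frame):
    """Stub for an image-classification network."""
    return "person"

def detect(frame):
    """Stub for an object-detection network; returns bounding boxes."""
    return [(10, 20, 50, 60)]

frame = object()  # placeholder for a captured camera frame
with ThreadPoolExecutor(max_workers=2) as pool:
    cls_future = pool.submit(classify, frame)
    det_future = pool.submit(detect, frame)

print(cls_future.result(), det_future.result())
```

On real hardware the heavy lifting happens inside the GPU inference calls, so thread-level dispatch like this is enough to keep several models busy on one frame stream.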
Autonomous industrial assembly using force, torque, and RGB-D sensing
Published in Advanced Robotics, 2020
James Watson, Austin Miller, Nikolaus Correll
Control of the system is provided by a Robotic Materials SmartHand. The SmartHand is an integrated computing, vision, and parallel-gripper platform designed for use with serial, collaborative [12] robots. It integrates an NVIDIA Jetson TX2 computer for control and image processing. Visual sensory data comes from an Intel RealSense D430 mounted in the palm. The RealSense is an active stereo depth camera: an infrared projector casts a pattern into the field of view of two infrared image sensors, and per-pixel depth is computed by stereo matching between them, resulting in a Red–Green–Blue plus Depth (RGB-D) image. Both RGB-D and infrared images can be obtained from the sensor. The camera and computer are mounted to an internal frame that provides passive cooling. The SmartHand interacts with its environment through a parallel gripper. The fingers of the gripper have a tapered profile that mimics the beak of a crow, a design choice based on corvids' renown for dexterous manipulation [13]. Also like a crow, the vision sensor's field of view covers the immediate workspace of the gripper, allowing it to see assembly objects as close as 11 cm.
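The depth channel of an RGB-D image from a stereo camera follows the standard triangulation relation z = f·b/d (focal length times baseline over disparity). The sketch below illustrates the conversion; the focal length and baseline values are hypothetical placeholders, not the D430's actual calibration.

```python
# Depth from stereo disparity: z = f * b / d.
FOCAL_PX = 640.0    # hypothetical focal length in pixels
BASELINE_M = 0.05   # hypothetical baseline between the two IR imagers, metres

def depth_from_disparity(disparity_px):
    """Convert a stereo disparity (pixels) to metric depth (metres)."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity means the point is at infinity
    return FOCAL_PX * BASELINE_M / disparity_px

print(depth_from_disparity(4.0))  # prints 8.0 (metres)
```

The relation also explains a minimum sensing distance like the 11 cm quoted above: the closer an object, the larger its disparity, and the search range for matching disparities is finite.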
Real-time reading system for pointer meter based on YolactEdge
Published in Connection Science, 2023
Chengjun Yang, Ruijie Zhu, Xinde Yu, Ce Yang, Lijun Xiao, Scott Fowler
Balancing system compatibility and accuracy is a challenge when using a single end-to-end model for meter reading. To address this, we propose pairing a lightweight model, which allows new gauge types to be added quickly, with a highly accurate instance-segmentation model for reading the gauges. To ensure stability, we deployed the system on the NVIDIA Jetson Xavier NX, a low-power edge computing device, and optimised it with NVIDIA TensorRT.
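The two-model split described above can be sketched as a simple dispatch pipeline: a lightweight classifier selects the gauge type, then a type-specific segmentation reader produces the value. All model functions below are hypothetical stubs for illustration, not the authors' implementation.

```python
# Two-stage meter reading: lightweight type classifier -> per-type reader.

def classify_gauge(image):
    """Stub for the lightweight model; cheap to retrain for new gauge types."""
    return "pressure"

def read_pressure(image):
    """Stub for the accurate instance-segmentation reader for this gauge type."""
    return 2.5

READERS = {"pressure": read_pressure}  # registry of per-type reading models

def read_meter(image):
    gauge_type = classify_gauge(image)
    return READERS[gauge_type](image)

print(read_meter(object()))  # prints 2.5
```

Keeping the type classifier separate means adding a new gauge only requires registering a new reader, without retraining the accurate segmentation models for existing types.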