Input–Output Organisation
Published in Pranabananda Chakraborty, Computer Organisation and Architecture, 2020
InfiniBand is essentially a high-speed link with an I/O specification designed for data flow among processors and intelligent I/O devices with large storage configurations. It is intended mainly for cluster system architectures (discussed in detail in Chapter 10), in which a number of computer systems are connected together to present a single-system image. The standard is the outcome of the merger of two competing projects aimed at the high-end server market: Future I/O (backed by HP, Cisco, Compaq, and IBM) and Next Generation I/O (developed by Intel, Microsoft, and Sun, and supported by a number of other companies). InfiniBand was originally envisioned as a comprehensive low-latency, high-bandwidth, low-overhead interconnect for storage area networking in commercial data centres, although in practice it may only connect servers and storage to each other, leaving more local connections to other protocols and standards such as PCI.
Published in David R. Martinez, Robert A. Bond, M. Michael Vai, High Performance Embedded Computing Handbook, 2018
Typical link bandwidths for some COTS fabrics are shown in Table 14-3. Serial RapidIO is a board-to-board and intraboard fabric used by Mercury Computer Systems on their VXS and CompactPCI systems. It is also supported by Freescale, with RapidIO network interfaces embedded in some PowerPCs. PCI Express was developed for use within PCs to provide higher-bandwidth access between the CPU and bus peripherals such as the graphics processing unit (GPU), hard drives, and USB devices. It is supported on VXS systems, CompactPCI, and ATCA (Advanced Telecom Computing Architecture), a standard similar to VXS supported within the telecommunications industry. InfiniBand is a box-to-box fabric aimed at cluster computing and processor-to-storage interconnects. It is also supported on both VXS and ATCA.
Storage Access Methods
Published in Al Kovalick, Video Systems in an IT Environment, 2013
InfiniBand is an ultra-low-latency, non-IP interconnect for communication, storage, and embedded applications. Based on an industry standard, it provides a robust data center interconnect. With 30 Gbps and 60 Gbps link products currently shipping, InfiniBand is at least a generation ahead of competing fabric technologies today. It was developed to cluster servers in data centers. It is considered an exotic technology and is sometimes found at the high end of computing configurations. One of the leaders in InfiniBand-based products is Mellanox (www.mellanox.com), which offers a single-unit switch with 24 ports and a throughput of 60 Gbps per port. InfiniBand links use 8b/10b encoding, so the payload rates are 80 percent of the line rates. See Appendix E.
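As a quick illustration of the 8b/10b overhead mentioned above, the short Python sketch below (not from the book; the function name is introduced here purely for illustration) converts the quoted 30 Gbps and 60 Gbps line rates into usable payload rates.

```python
def payload_rate_gbps(line_rate_gbps: float) -> float:
    """Usable data rate of an 8b/10b-encoded link: 8 data bits per 10-bit symbol."""
    return line_rate_gbps * 8 / 10

# Link speeds quoted in the excerpt above.
for line_rate in (30, 60):
    print(f"{line_rate} Gbps line rate -> {payload_rate_gbps(line_rate):.0f} Gbps payload")
# 30 Gbps line rate -> 24 Gbps payload
# 60 Gbps line rate -> 48 Gbps payload
```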
A method for in-field railhead crack detection using digital image correlation
Published in International Journal of Rail Transportation, 2022
Knut Andreas Meyer, Daniel Gren, Johan Ahlström, Anders Ekberg
Without a speckle pattern, a resolution of 123 pixels/mm was used. Measuring a 30 mm-wide band thus requires approximately 0.5 megapixels/mm. This pixel density corresponds to 0.5 MB/mm for 8-bit greyscale images. A train moving at 100 km/h will then produce about 14 GB/s per camera. High-performance network communication standards, such as HDR InfiniBand, surpass this requirement by achieving 50 GB/s. Another concern is the amount of data generated. The uncompressed raw data from characterizing a 500 km railway line with four cameras is 1000 TB. Such amounts can be stored using a sufficient number of drives. However, permanently storing the raw data is not necessary. At the end of a measurement series, the data can be moved to a stationary computer resource for processing, after which only the result must be stored. For example, a damage indicator could be stored with a 1 m resolution. Such data amounts to 2 MB for 500 km if single-precision floats are used.
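The back-of-the-envelope figures above can be reproduced with a minimal Python sketch (not from the paper; it simply starts from the rounded 0.5 MB/mm density quoted in the excerpt):

```python
# Reproduce the data-rate and storage estimates quoted in the excerpt.
mb_per_mm   = 0.5              # ~ (123 px/mm)^2 * 30 mm * 1 byte/px, rounded as in the text
speed_mm_s  = 100e6 / 3600     # 100 km/h expressed in mm/s

rate_gb_s    = mb_per_mm * speed_mm_s / 1e3   # per-camera data rate
raw_tb       = mb_per_mm * 500e6 * 4 / 1e6    # 500 km line, four cameras, uncompressed
indicator_mb = 500e3 * 4 / 1e6                # one float32 damage value per metre of track

print(f"per-camera data rate        : {rate_gb_s:.0f} GB/s")   # ~14 GB/s
print(f"raw data, 500 km, 4 cameras : {raw_tb:.0f} TB")        # 1000 TB
print(f"damage indicator, 500 km    : {indicator_mb:.0f} MB")  # 2 MB
```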
Employing multi-GPU power for molecular dynamics simulation: an extension of GALAMOST
Published in Molecular Physics, 2018
You-Liang Zhu, Deng Pan, Zhan-Wei Li, Hong Liu, Hu-Jun Qian, Yang Zhao, Zhong-Yuan Lu, Zhao-Yan Sun
We have also benchmarked patchy particles on a cluster located at the Dalian Institute of Chemical Physics, Chinese Academy of Sciences. Each computing node of the cluster is equipped with two NVIDIA Tesla K20m GPUs, and InfiniBand, with a bandwidth of up to 56 Gb/s, is used to transmit data between nodes. We test a system of 1.536 million two-patch particles. The reasons for choosing the patchy particle benchmark are that (i) it is very computationally expensive and (ii) it is a main feature of GALAMOST. Because of (i), multi-GPU runs are critical for the large temporal and spatial scales needed to study the self-assembly of patchy particles. Because of (ii), we want to highlight the capability of GALAMOST. As we can see from Figure 11, essentially ideal scaling holds up to 32 GPUs (two GPUs per node) for both single and double precision, beyond which the scaling efficiency decreases. In daily studies, several to dozens of GPUs are typically used, a range in which GALAMOST performs very well.
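The scaling claim can be made concrete with a small sketch of how strong-scaling efficiency is commonly computed for a fixed-size problem; the timings below are hypothetical placeholders for illustration only, not GALAMOST results from the paper.

```python
def scaling_efficiency(t_base: float, n_base: int, t_n: float, n: int) -> float:
    """Parallel efficiency relative to a baseline run: (t_base * n_base) / (t_n * n)."""
    return (t_base * n_base) / (t_n * n)

# Hypothetical wall-clock times (seconds per 1000 MD steps), baseline = 2 GPUs (one node).
timings = {2: 100.0, 8: 25.5, 32: 6.8, 64: 4.1}
t_base = timings[2]
for gpus, t in timings.items():
    eff = scaling_efficiency(t_base, 2, t, gpus)
    print(f"{gpus:3d} GPUs: efficiency = {eff:.2f}")   # values near 1.0 indicate near-ideal scaling
```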