NoC Topology
Published in Marcello Coppola, Miltos D. Grammatikakis, Riccardo Locatelli, Giuseppe Maruccia, Lorenzo Pieralisi, Design of Cost-Efficient Interconnect Processing Units, 2020
Edge bisection width refers to network wire density. It is defined as the minimum number of edges that must be cut to separate the network into two equal halves (within one node). More specifically, a network cut C(N1, N2) is a set of channels whose removal partitions all N network nodes into two disjoint sets N1 and N2. If the disjoint sets have the same cardinality (within one node), i.e. |N2| ≤ |N1| ≤ |N2| + 1, then the cut is called a bisection. This is an important metric, since the rate at which information crosses the network bisection (called bisection bandwidth) is the product of the bisection width (number of links), the number of wires at each link (link width) and the data transfer rate of each link (link bandwidth). For continuous routing with independent, uniform random traffic, a large bisection increases VLSI complexity, but provides good load distribution among the different paths, thus reducing communication bottlenecks, i.e. latency and saturation rate.
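As a rough illustration of this product, the following Python sketch computes the bisection bandwidth of a small 2D mesh NoC; the mesh size, link width and link rate used here are assumed example values, not figures from the chapter.

# Minimal sketch: bisection bandwidth = bisection width * link width * link rate.
# The k x k mesh formula (bisection width = k for even k) is standard; the link
# width and signalling rate below are illustrative assumptions.

def mesh_bisection_width(k: int) -> int:
    """Links cut when an even-sided k x k 2D mesh is split into two halves."""
    assert k % 2 == 0, "formula assumes an even number of columns"
    return k  # one link per row crosses the vertical cut

def bisection_bandwidth(bisection_width: int,
                        link_width_bits: int,
                        link_rate_hz: float) -> float:
    """Aggregate bits/s crossing the network bisection."""
    return bisection_width * link_width_bits * link_rate_hz

if __name__ == "__main__":
    k = 8                                            # 8 x 8 mesh NoC (assumed)
    bw = bisection_bandwidth(mesh_bisection_width(k),
                             link_width_bits=32,     # 32-bit links (assumed)
                             link_rate_hz=1e9)       # 1 GHz links (assumed)
    # 8 links * 32 bits * 1 GHz = 256 Gbit/s = 32 GB/s
    print(f"bisection bandwidth: {bw / 8e9:.1f} GB/s")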
Implementation of Distributed Algorithms for Finite Element Analysis on a Network of Workstations
Published in Hojjat Adeli, Sanjay Kumar, Distributed Computer-Aided Engineering, 2020
The three dominant factors that determine how fast two workstations can communicate are latency, bisection bandwidth, and network topology. Latency is the overhead time required for the system to send and receive a message of zero length. Bisection bandwidth measures the rate at which data can be sent between workstations. A good parallel machine has low latency and high bandwidth. There is always some loss of efficiency due to the collision of packets on the network. The frequency of collisions is influenced by the topology of the interconnection network and becomes the dominant factor as the number of workstations grows. For Ethernet-connected workstations, there is no parallelism in communication due to the bus-like nature of the network topology. If two workstations send data packets over the network at the same time, their requests are serialized, leading to varying delays, because there can be only one message on the bus at a time.
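The latency and bandwidth figures above suggest the usual first-order communication-cost model, sketched below in Python; the numeric values are assumed placeholders rather than measurements from the chapter.

# Minimal sketch of the first-order cost model implied above:
# time = latency (zero-length-message overhead) + size / bandwidth.
# The example numbers are illustrative assumptions, not measured values.

def message_time(size_bytes: float,
                 latency_s: float,
                 bandwidth_bytes_per_s: float) -> float:
    """Estimated time to deliver one message between two workstations."""
    return latency_s + size_bytes / bandwidth_bytes_per_s

if __name__ == "__main__":
    latency = 50e-6      # 50 microseconds of per-message overhead (assumed)
    bandwidth = 12.5e6   # ~100 Mbit/s Ethernet, in bytes/s (assumed)
    for size in (0, 1_000, 1_000_000):
        t = message_time(size, latency, bandwidth)
        print(f"{size:>9} bytes -> {t * 1e3:8.3f} ms")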
Representative Example of a High Performance Embedded Computing System
Published in David R. Martinez, Robert A. Bond, Vai M. Michael, High Performance Embedded Computing Handbook, 2018
After the Doppler filtering shown in Figure 2-3, the data had to undergo yet another corner turn to gather all the channel data prior to the adaptive beamforming stage. The corner turn significantly stresses the capabilities of any parallel processor, so a critical measure of signal processor capability is what is referred to as system bisection bandwidth (Teitelbaum 1998). In simple terms, bisection bandwidth is a measure of how much data flows from one half of the processor to the other if the system is figuratively “bisected.” For many of the classes of complex processing described in this chapter, the desired bisection bandwidth in bytes per second is, as a rule of thumb, about 1/10th of the total system computation in operations per second. For example, if a system requires a total system computational throughput of 1 TeraOps, then the approximate minimum bisection bandwidth is 100 gigabytes/s. This is only an empirical rule of thumb that varies from application to application, but it serves as a general metric of expected capability from the HPEC system, useful for efficiently balancing computation with real-time communication.
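The 1/10th rule of thumb reduces to a one-line calculation; the Python sketch below simply reproduces the 1 TeraOps example from the text.

# Minimal sketch of the 1/10 rule of thumb quoted above: desired bisection
# bandwidth (bytes/s) is roughly one tenth of total throughput (ops/s).

def rule_of_thumb_bisection_bw(total_ops_per_s: float) -> float:
    """Approximate minimum bisection bandwidth in bytes per second."""
    return total_ops_per_s / 10.0

if __name__ == "__main__":
    throughput = 1e12  # 1 TeraOps total system computation, as in the text
    bw = rule_of_thumb_bisection_bw(throughput)
    print(f"suggested bisection bandwidth: {bw / 1e9:.0f} GB/s")  # ~100 GB/s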
Performance centric design of subnetwork-based diagonal mesh NoC
Published in International Journal of Electronics, 2019
Tuhin Subhra Das, Prasun Ghosal
Bisection bandwidth, the product of the channel bandwidth and the number of channels crossing the network bisection, refers to the minimum aggregate bandwidth across the chip backplane. Increasing this bisection bandwidth improves network performance; however, it incurs a larger area penalty. Conversely, for the same area and theoretical throughput, if the bisection bandwidth is kept fixed, the channels can be widened while their number is reduced, which lowers the packet serialisation delay but at the same time decreases the channel utilisation ratio. Another disadvantage is that fewer links make the network more vulnerable and unpredictable. So a trade-off is required here. A detailed study among these static network parameters is also listed in Table 2.
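The trade-off between channel width and the number of bisection channels at a fixed bisection bandwidth can be sketched as follows; the packet size, link rate and channel counts below are assumed example values, not parameters from the paper.

# Minimal sketch of the trade-off described above: at a fixed bisection
# bandwidth, widening the channels (i.e. using fewer bisection links) lowers
# the packet serialisation delay. All constants are illustrative assumptions.

FIXED_BISECTION_BW_BITS_PER_S = 256e9   # assumed total bisection bandwidth
PACKET_BITS = 512                       # assumed packet size
LINK_RATE_HZ = 1e9                      # assumed per-link signalling rate

def serialisation_delay_cycles(num_bisection_channels: int) -> float:
    """Cycles to push one packet through a channel of the resulting width."""
    channel_width_bits = FIXED_BISECTION_BW_BITS_PER_S / (num_bisection_channels * LINK_RATE_HZ)
    return PACKET_BITS / channel_width_bits

if __name__ == "__main__":
    # Fewer, wider channels -> lower serialisation delay per packet.
    for channels in (8, 16, 32, 64):
        print(f"{channels:>3} channels -> "
              f"{serialisation_delay_cycles(channels):6.1f} cycles per packet")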