The Future X Network
Published in Marcus K. Weldon, The Future X Network, 2018
The two elements — ultra-high capacity and ultra-low latency — are related. One primary cause of this latency is the speed of light, which induces 4.5ms of latency for every 1000km, and so requires a proximity of ~100km or less to support a (round trip) response time of 1ms. The other primary cause of network latency is the delay induced by network hops, when packets are queued for delivery over an interface that has lower capacity than the sum of the input flows. This queuing delay is less than a millisecond on average, but in times of severe congestion it can amount to tens of milliseconds, which significantly compromises the performance of latency-sensitive services. To offer low-latency service guarantees, one must therefore minimize the number of network hops and maximize the available bandwidth. These dual requirements essentially mandate the creation of edge cloud nodes and ultra-high-capacity access and metropolitan aggregation networks providing the required “onramp” connectivity to these nodes.
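To make the distance arithmetic concrete, here is a minimal Python sketch that applies the excerpt's figure of ~4.5ms of one-way propagation latency per 1000km; the helper names and the 1ms round-trip budget are illustrative assumptions, not from the book:

```python
# Back-of-the-envelope propagation-latency budget, using the excerpt's
# figure of ~4.5 ms of one-way latency per 1000 km of fiber.

MS_PER_KM = 4.5 / 1000.0  # one-way propagation latency per km

def round_trip_latency_ms(distance_km: float) -> float:
    """A round trip covers the source-to-node distance twice."""
    return 2 * distance_km * MS_PER_KM

def max_distance_for_rtt_ms(rtt_budget_ms: float) -> float:
    """Largest source-to-node distance that fits within the RTT budget."""
    return rtt_budget_ms / (2 * MS_PER_KM)

print(round_trip_latency_ms(100))    # ~0.9 ms round trip at 100 km
print(max_distance_for_rtt_ms(1.0))  # ~111 km fits a 1 ms round trip
```

This reproduces the excerpt's rule of thumb: an edge cloud node must sit within roughly 100km of the user to keep the propagation component of a round trip under 1ms.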
Performance Metrics and Enabling Technologies
Published in F. Richard Yu, Tao Huang, Garima Ameta, Yunjie Liu, Integrated Networking, Caching, and Computing, 2018
The service latency metric refers to the delay induced by preparation and propagation of data packets in the system. Since this chapter only focuses on radio access networks in the networking part, the technologies discussed in this chapter involve only three types of latencies, which are specified as follows.

Propagation Delay. As the primary source of latency, propagation delay is defined as a function of how long it takes information to travel at the speed of light in wireless channels from origin to destination.

Serialization Delay. Serialization is the conversion of bytes (8 bits) of data stored in a device’s memory into a serial bit stream to be transmitted over the wireless channels. Serialization takes a finite amount of time and is calculated as: serialization delay = packet size in bits / transmission rate in bits per second.

Queuing Delay. Queuing delay refers to the amount of time a data packet spends sitting in a queue awaiting transmission due to over-utilization of the outgoing link.
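Each component can be computed directly from its definition. The following minimal Python sketch does so; the link distance, packet size, data rate, and queue depth are illustrative assumptions, not values from the chapter:

```python
# Minimal sketch of the three latency components defined above.
# All numeric inputs in the example are illustrative.

SPEED_OF_LIGHT_M_S = 3.0e8  # free-space propagation in a wireless channel

def propagation_delay_s(distance_m: float) -> float:
    """Time for the signal to travel from origin to destination."""
    return distance_m / SPEED_OF_LIGHT_M_S

def serialization_delay_s(packet_size_bits: int, rate_bps: float) -> float:
    # serialization delay = packet size in bits / transmission rate in bits/s
    return packet_size_bits / rate_bps

def queuing_delay_s(queued_bits: int, rate_bps: float) -> float:
    # time a packet waits behind bits already queued on the outgoing link
    return queued_bits / rate_bps

# Example: a 1500-byte packet over a 100 Mbit/s link, 3 km from the base station
print(propagation_delay_s(3_000))              # 1e-05 s  (10 µs)
print(serialization_delay_s(1500 * 8, 100e6))  # 1.2e-04 s (120 µs)
print(queuing_delay_s(10 * 1500 * 8, 100e6))   # 1.2e-03 s with 10 packets queued
```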
Fairness and Bandwidth Allocation
Published in Liansheng Tan, Resource Allocation and Performance Optimization in Communication Networks and the Internet, 2017
FAST TCP [172,290,334,352] is a new TCP congestion control algorithm for high-speed long-distance networks; it aims to rapidly stabilize networks into steady, efficient, and fair operating points. It uses queuing delay, in addition to packet loss, as a congestion signal. Queuing delay provides a finer measure of congestion and scales more naturally with network capacity than packet loss probability does [172]. Using the queuing delay as a congestion measure in its window-updating equation [334] allows FAST TCP to overcome difficulties [291] encountered by currently used algorithms (such as TCP Reno [292]) in networks with large bandwidth-delay products.
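The delay-based window update can be sketched as follows. This follows the commonly published FAST TCP update rule; the alpha and gamma values and the traffic scenario are illustrative assumptions, not tuned parameters:

```python
# Sketch of the FAST TCP window update: the window reacts to measured
# queuing delay (rtt - base_rtt) rather than to packet loss alone.

def fast_window_update(w: float, base_rtt: float, rtt: float,
                       alpha: float = 20.0, gamma: float = 0.5) -> float:
    """One periodic update:
    w <- min(2w, (1 - gamma)*w + gamma*((base_rtt / rtt)*w + alpha)).
    At equilibrium the flow keeps roughly alpha packets queued in the
    path, so the signal scales with capacity, not loss probability."""
    target = (base_rtt / rtt) * w + alpha
    return min(2 * w, (1 - gamma) * w + gamma * target)

# Example: 10 ms of queuing delay on a 50 ms base RTT; the window
# converges toward w* = alpha * rtt / (rtt - base_rtt) = 120 packets.
w = 100.0
for _ in range(8):
    w = fast_window_update(w, base_rtt=0.050, rtt=0.060)
print(round(w, 1))
```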
Towards complex dynamic fog network orchestration using embedded neural switch
Published in International Journal of Computers and Applications, 2021
K. C. Okafor, G. C. Ononiwu, Sam Goundar, V. C. Chijindu, C. C. Udeze
Due to the massive communication patterns in the HDN, a smart Fog layer is needed. Without this layer, there will be frequently high link utilization and workload congestion at the aggregation or core layers [8]. This is because data center networks leverage multiple parallel paths connecting end-host pairs to offer high bisection bandwidth for cluster-computing applications [9]. These congested heavy flows often lead to the breakdown of commodity servers, while some specialized links within the HDN observe higher loss ratios than others [10]. In such networks, Equal Cost Multipath (ECMP) routing protocols are unaware of the traffic workload because of their static flow-to-link assignments, which usually cause bandwidth loss arising from flow collisions. While high resource utilization is partly favorable to service providers, network congestion can cause harmful queuing delay and packet loss, and thus reduces network throughput. These consequences can significantly degrade application performance and user experience. Figure 1 shows how the Fog Ethernet switch components are used for connections supporting latency and congestion management, similar to a traditional DCN network storage [11,12]. In this case, the switch is used to separate storage, network, and HPC traffic into distinct virtual fabrics using shared memory partitions. This is useful in a well-managed Fog data center network for achieving overall performance efficiency in deployment contexts.
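The flow-collision problem mentioned above follows from ECMP hashing each flow onto a fixed link. A minimal Python sketch (the flow tuples and uplink count are hypothetical) shows how two heavy flows can be pinned to the same uplink while other parallel links stay idle:

```python
# Sketch of ECMP's static flow-to-link assignment: each flow is hashed
# once on its 5-tuple, so heavy ("elephant") flows can collide on one
# uplink regardless of load. Flow tuples below are hypothetical.

import hashlib

NUM_UPLINKS = 4

def ecmp_link(flow_5tuple: tuple) -> int:
    """Static hash of the flow's 5-tuple onto a parallel uplink;
    ECMP never checks how loaded the chosen link already is."""
    digest = hashlib.md5(repr(flow_5tuple).encode()).digest()
    return digest[0] % NUM_UPLINKS

flows = [
    ("10.0.0.1", "10.0.1.1", 6, 40000, 80),  # heavy flow A
    ("10.0.0.2", "10.0.1.2", 6, 40001, 80),  # heavy flow B
    ("10.0.0.3", "10.0.1.3", 6, 40002, 80),  # heavy flow C
]
for f in flows:
    print(f[0], "->", f[1], "pinned to uplink", ecmp_link(f))
# If two heavy flows hash to the same uplink, that link queues and
# drops packets while the remaining uplinks may carry nothing.
```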
A Routing Technique for Enhancing the Quality of Service in Vanet
Published in IETE Journal of Research, 2023
Arindam Debnath, Habila Basumatary, Mili Dhar, Bidyut K. Bhattacharyya, Mrinal Kanti Debbarma
Various routing protocols have been designed to overcome the above-mentioned problems in VANETs. For example, Global State Routing (GSR) [15], Greedy Perimeter Stateless Routing (GPSR) [16], and Greedy Perimeter Coordinator Routing (GPCR) [17] select the minimum-distance path between the source vehicle and the destination vehicle. However, all these protocols suffer from frequent link failures, which result in short-lived end-to-end communication. This failure occurs very frequently between a source and a data-forwarding vehicle because of the way the neighbor forwarding vehicle is selected. On the other hand, Greedy Traffic-Aware Routing (GyTAR) [18], Anchor-based Street and Traffic-Aware Routing (A-STAR) [19], and the Stable CDS-Based Routing Protocol (SCRP) [20] always forward data packets through well-connected roads. A backbone/guard node at the road junction is responsible for directing the data packets to the actual destination. In the case of multiple source-destination pairs, the intermediate vehicles all send their data through this backbone node at the road junction. This results in data congestion and queuing delay, and consequently huge numbers of packets are dropped during communication. Recently, Named Data Networking (NDN) technology [21] has been used to address the above-mentioned problems in vehicular environments. In NDN, the content store module caches sent or received content, which can improve network performance by eliminating the redundancy of IP-based networks. Our infrastructure-less proposed routing method, however, mainly focuses on minimizing the link breakage problem between the source and intermediate vehicles and on maintaining long-lived end-to-end communication during data transfer in a TCP/IP-based vehicular network.
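The congestion at a shared backbone node can be made concrete with the standard M/M/1 queuing-delay formula W_q = ρ/(μ − λ). The sketch below is not from the paper; the service rate and per-pair traffic load are illustrative assumptions:

```python
# Sketch (not from the paper): the backbone node at a junction modeled
# as an M/M/1 queue. As more source-destination pairs push flows
# through the same node, the arrival rate approaches the service rate
# and the mean queuing delay W_q = rho / (mu - lambda) blows up.

def mm1_queuing_delay(arrival_rate: float, service_rate: float) -> float:
    """Mean time a packet waits in queue (seconds), M/M/1 model."""
    if arrival_rate >= service_rate:
        return float("inf")  # unstable queue: unbounded delay and drops
    rho = arrival_rate / service_rate
    return rho / (service_rate - arrival_rate)

mu = 1000.0  # backbone node forwards 1000 packets/s (illustrative)
for pairs in (1, 4, 8, 9):       # each pair offers 100 packets/s
    lam = pairs * 100.0
    print(pairs, "pairs ->", round(mm1_queuing_delay(lam, mu) * 1000, 2), "ms")
```

The delay grows from about 0.11ms with one pair to 9ms with nine pairs, which is why funneling many flows through one junction node leads to queuing delay and packet drops.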
Resisting bad mouth attack in vehicular platoon using node-centric weight-based trust management algorithm (NC-WTM)
Published in Connection Science, 2022
It is defined as the overall time taken by data to travel across the platoon from the source vehicle to the destination vehicle. End-to-end network delay is caused by various factors, including transmission delay, propagation delay, processing delay, and queuing delay. For TCP connections, application relays can minimise end-to-end delays and enhance performance. Additional features, such as application relays, media transcoding, and mixers, can be added to improve the performance of broadcasting between participants in the overlay.
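Since end-to-end delay is the sum of these four components over every hop, it can be estimated as in the minimal Python sketch below; the hop parameters (packet size, data rate, distances, processing and queuing times) are illustrative assumptions, not measurements from the paper:

```python
# Sketch: end-to-end delay as the sum of the four per-hop components
# listed above. All hop parameters are illustrative.

def hop_delay_s(size_bits, rate_bps, dist_m, prop_m_s, proc_s, queue_s):
    transmission = size_bits / rate_bps  # time to serialize packet onto link
    propagation = dist_m / prop_m_s      # signal travel time over the air
    return transmission + propagation + proc_s + queue_s

hops = [  # (size, rate, distance, propagation speed, processing, queuing)
    (1500 * 8, 6e6, 120.0, 3e8, 0.5e-3, 2e-3),  # source -> relay vehicle
    (1500 * 8, 6e6, 150.0, 3e8, 0.5e-3, 1e-3),  # relay -> destination vehicle
]
end_to_end = sum(hop_delay_s(*h) for h in hops)
print(round(end_to_end * 1000, 3), "ms")  # total delay, source to destination
```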