Traffic Control
Published in Naoaki Yamanaka, High-Performance Backbone Network Technology, 2020
Weighted Round Robin (WRR) scheduling is an extension of round robin scheduling. Because of its simplicity and bandwidth guarantee, WRR cell scheduling is commonly used in Asynchronous Transfer Mode (ATM) switches. However, since cells in individual queues are sent cyclically, the delay bounds in WRR scheduling grow as the number of queues increases. This chapter shows that the burstiness generated in the network is an even greater factor in the degradation of delay bounds. In ATM switches with per-class queueing, a number of connections are multiplexed into one class queue. The chapter proposes a new WRR scheme, WRR with Save and Borrow (WRR/SB), that improves the delay bound performance of WRR by taking into account the burstiness generated in the network. It also shows that WRR/SB provides better delay bounds than WRR and achieves the same target delay bound with less extra bandwidth, whereas WRR must be allocated a large amount of extra bandwidth.
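The basic WRR discipline described above can be sketched as follows; this is a minimal illustration, not the WRR/SB scheme from the chapter, and the function and variable names are hypothetical:

```python
from collections import deque

def weighted_round_robin(queues, weights):
    """One WRR cycle: queue i may send up to weights[i] cells per round.

    Queues are visited cyclically, which is why the delay bound grows
    with the number of queues: a cell arriving just after its queue's
    turn waits for every other queue's quota before being served.
    """
    sent = []
    for q, w in zip(queues, weights):
        for _ in range(w):
            if not q:          # queue exhausted before using its quota
                break
            sent.append(q.popleft())
    return sent

# Two class queues: class A (weight 2) and class B (weight 1).
a = deque(["a1", "a2", "a3"])
b = deque(["b1", "b2"])
print(weighted_round_robin([a, b], [2, 1]))  # ['a1', 'a2', 'b1']
```

WRR/SB would additionally let a queue save unused quota and borrow against it in later rounds; that bookkeeping is omitted here.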
Neural adaptive admission control framework: SLA-driven action termination for real-time application service management
Published in Enterprise Information Systems, 2021
Tomasz D. Sikora, George D. Magoulas
The framework uses the not-weighted Round Robin scheduling (Stallings 2014). Each process is given a fixed time to execute, called a quantum. Once a process has executed for the specified time period, it is pre-empted and another process executes for a given time period (Jensen, Locke, and Tokuda 1985). Each action request is transformed into a process that is decomposed into smaller chunks, which are served by resource controllers, according to the distribution set of the action type definition. Context switching is used to save the states of pre-empted processes.
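The quantum-based, not-weighted round robin just described can be sketched as below; the function name and the process identifiers are hypothetical, and burst times stand in for each process's remaining work:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Plain (not-weighted) round robin: each process runs for at most
    one quantum, then is pre-empted and re-queued if work remains.
    Returns the order in which processes receive the CPU."""
    queue = deque(burst_times.items())
    order = []
    while queue:
        pid, remaining = queue.popleft()
        order.append(pid)                      # process runs for one quantum
        if remaining > quantum:                # pre-empted: context switch,
            queue.append((pid, remaining - quantum))  # save state, re-queue
    return order

print(round_robin({"p1": 5, "p2": 2, "p3": 3}, quantum=2))
# ['p1', 'p2', 'p3', 'p1', 'p3', 'p1']
```

Re-queueing the `(pid, remaining)` pair plays the role of the context switch: the pre-empted process's remaining work is the saved state.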
Proactive flow control using adaptive beam forming for smart intra-layer data communication in wireless network on chip
Published in Automatika, 2023
Dinesh Kumar T.R., Karthikeyan A.
The packets are divided into smaller pieces called flits. The header flit is the first flit of a packet and carries control information for packet delivery, such as the source address, destination address, operation type, packet type, role, priority, and payload size. The path flit is the second flit and provides the path information along with the packet's sequence number in the current transaction between source and destination. The body flit, also known as the payload, is the third flit and contains the actual data to be transferred to the destination. The packet format, as shown in Figure 2, is made up of nine fields: Source (Sc) is the initiator of the communication; DC is the destination core address; Operation (Op) indicates the type of transaction (read, write, conditional write, broadcast, etc.); Type indicates the kind of information being exchanged (such as data, instructions, or signal types); Role indicates the source component's role (e.g. user, root); Priority classifies the traffic; Size indicates the number of bytes in the payload; Payload is the actual data produced by the source core; and Path is the packet's registered path. PF.SDC assigns a proportional weight to each data flow to suit service needs. This is accomplished by setting the priority field in the header flit to 0 or 1. Normal or Low Priority (LP) data traffic is coded as 0, while emergency or High Priority (HP) data traffic is coded as 1. Emergency traffic needs superior service; as a result, priority in procuring network resources is usually given to emergency or real-time traffic for rapid and reliable transmission. The most extensively used packet scheduling algorithms are Weighted Fair Queuing (WFQ) [29], Weighted Round Robin (WRR) [30], and Strict Priority (SP) [28].
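The header-flit fields and the two-level priority coding described above can be sketched as a simple record; the class and helper names are hypothetical and the field types are assumptions for illustration:

```python
from dataclasses import dataclass

LOW_PRIORITY = 0    # normal (LP) data traffic
HIGH_PRIORITY = 1   # emergency / real-time (HP) data traffic

@dataclass
class HeaderFlit:
    src: int        # Sc: source core, initiator of the communication
    dst: int        # DC: destination core address
    op: str         # Op: read / write / conditional write / broadcast
    type: str       # kind of information: data, instruction, signal
    role: str       # source component's role, e.g. user or root
    priority: int   # 0 = low priority, 1 = high priority
    size: int       # payload length in bytes

def is_emergency(flit: HeaderFlit) -> bool:
    """HP traffic (priority 1) gets first claim on network resources."""
    return flit.priority == HIGH_PRIORITY

hdr = HeaderFlit(src=3, dst=7, op="write", type="data", role="user",
                 priority=HIGH_PRIORITY, size=64)
print(is_emergency(hdr))  # True
```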
A scheduling discipline with a probabilistic priority (SPP) queuing mechanism is used in the proposed model to cope with the starvation problem and make the priority discipline adjustable. SPP distinguishes itself from conventional scheduling algorithms by being “simple to develop and configure, effectively utilizing existing bandwidth and requiring very little memory and processing power.” SPP assigns each queue a parameter (high or low priority) that determines how likely the queue is to be served, and thus offers different levels of service based on traffic.
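The probabilistic selection at the heart of SPP can be sketched as follows; this is a minimal illustration under assumed semantics (each queue's parameter is treated as a service probability, renormalized over non-empty queues), and the function name is hypothetical:

```python
import random

def spp_select(queues, probs, rng=random.Random(0)):
    """Pick one packet: a non-empty queue i is chosen with probability
    proportional to probs[i]. Starvation is avoided because every
    non-empty queue keeps a non-zero chance of being served."""
    candidates = [(i, p) for i, (q, p) in enumerate(zip(queues, probs)) if q]
    if not candidates:
        return None                     # nothing to serve
    total = sum(p for _, p in candidates)
    r = rng.random() * total            # roulette-wheel selection
    acc = 0.0
    for i, p in candidates:
        acc += p
        if r < acc:
            return queues[i].pop(0)
    return queues[candidates[-1][0]].pop(0)   # guard against rounding

# High-priority queue is served ~4x as often as the low-priority one,
# but the low-priority queue is never starved outright.
served = [spp_select([["h"], ["l"]], [0.8, 0.2]) for _ in range(1000)]
print(served.count("h") / 1000)   # typically close to 0.8
```

Setting a queue's probability to 1.0 while others are near 0 recovers behavior close to strict priority, which is what makes the priority discipline adjustable.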