Traffic Control
Published in Naoaki Yamanaka, High-Performance Backbone Network Technology, 2020
Scheduling-based algorithms (e.g., Weighted Fair Queueing (WFQ) [4] and its variants [5]-[7]) are known to be ideal mechanisms for allocating bandwidth fairly and providing guaranteed QoS. In these approaches, however, a separate queue must be maintained for each flow and state is kept on a per-flow basis, so hardware efficiency suffers. More precisely, WFQ has a computational complexity of O(log n), where n is the number of flows currently queued at a router, which makes WFQ hard to implement in high-speed backbone routers with trunks that carry large numbers of flows.
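The per-flow state and O(log n) dequeue cost can be seen in a minimal sketch of a virtual-finish-time scheduler. This is an illustrative simplification, not the exact WFQ variant of [4]: the flow identifiers, packet sizes, and the virtual-time update are assumptions.

```python
import heapq

class WFQScheduler:
    """Sketch of WFQ-style scheduling using virtual finish times."""

    def __init__(self):
        self.virtual_time = 0.0
        self.heap = []          # (finish_time, seq, flow_id, size)
        self.last_finish = {}   # per-flow state: the cost noted in the text
        self.seq = 0            # tie-breaker for equal finish times

    def enqueue(self, flow_id, size, weight):
        # A packet "finishes" size/weight units of virtual time after the
        # later of the flow's previous finish and the current virtual time.
        start = max(self.virtual_time, self.last_finish.get(flow_id, 0.0))
        finish = start + size / weight
        self.last_finish[flow_id] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow_id, size))
        self.seq += 1

    def dequeue(self):
        # O(log n) heap pop: the complexity cost mentioned in the text.
        finish, _, flow_id, size = heapq.heappop(self.heap)
        self.virtual_time = finish
        return flow_id, size
```

With equal-sized packets, a flow of weight 2 is served ahead of (and twice as often as) a flow of weight 1, which is the fair-sharing property the excerpt describes.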
QoS and MPLS
Published in Nam-Kee Tan, MPLS for Metropolitan Area Networks, 2004
Weighted fair queuing (WFQ) offers a more lenient scheduling approach than priority queuing (PQ) through fair allocation of bandwidth among queues, based on a relative bandwidth weight applied to each queue. Class-based weighted fair queuing (CBWFQ) extends standard WFQ functionality to guarantee bandwidth or throughput to classes. In other words, a queue is reserved for each class, and traffic belonging to a class is directed to that class queue. CBWFQ guarantees bandwidth according to weights assigned to traffic classes. Active classes can also access unused bandwidth based on their weights.
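The redistribution of unused bandwidth among active classes can be sketched as a simple proportional-share computation. The class names and the helper below are hypothetical; the point is only that idle classes' shares are absorbed by the remaining classes in proportion to their weights.

```python
def cbwfq_shares(weights, active, link_bw):
    """Per-class bandwidth under CBWFQ-style weighted sharing.

    weights : dict mapping class name -> configured weight
    active  : set of classes that currently have traffic queued
    link_bw : total link bandwidth (e.g., in Mb/s)
    Active classes split the full link in proportion to their weights,
    so bandwidth left idle by inactive classes is redistributed.
    """
    total = sum(weights[c] for c in active)
    return {c: link_bw * weights[c] / total for c in active}
```

For example, with weights 50/30/20 on a 100 Mb/s link, each class gets its nominal share when all are active; if the weight-20 class goes idle, the weight-50 class's share rises from 50 to 62.5 Mb/s.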
Switched Ethernet in Automation
Published in Richard Zurawski, Industrial Communication Technology Handbook, 2017
Gunnar Prytz, Per Christian Juel, Rahil Hussain, Tor Skeie
A managed switch usually has four or more priority queues (e.g., critical, high, normal, and background) allowing the scheduling of packets at egress. The mapping of the CoS priorities to the internal priority levels is configurable on the switch. There are different scheduling algorithms applied, for example, strict priority, weighted fair queuing, and weighted round robin. Some managed switches can also use Layer 3 information, the differentiated services code point (DSCP) information in IPv4 and IPv6 headers, to map to the internal priority levels.
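One of the egress scheduling algorithms mentioned, weighted round robin, can be sketched in a few lines. The queue names and per-queue weights below are illustrative assumptions, not a particular switch's configuration; each cycle, a queue may send up to its weight in packets, and empty queues are skipped.

```python
from collections import deque

def weighted_round_robin(queues, weights):
    """Drain egress priority queues in weighted round-robin order.

    queues  : dict mapping queue name -> deque of packets
    weights : dict mapping queue name -> packets allowed per cycle
    Returns the order in which packets are transmitted.
    """
    order = []
    while any(queues.values()):
        for name, w in weights.items():
            q = queues[name]
            for _ in range(w):
                if not q:
                    break          # empty queue cannot stall the others
                order.append(q.popleft())
    return order
```

Unlike strict priority, a high-weight queue here cannot starve the others: lower-priority queues are still guaranteed a turn every cycle.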
Differentiated service policy in smart warehouse automation
Published in International Journal of Production Research, 2018
Zijian He, Vaneet Aggarwal, Shimon Y. Nof
The probabilistic queuing policy and the differentiated objective function in this research are motivated by policies in networking and cloud storage research, which focus on digital storage. Examples include a start-time fair queuing algorithm for packet-switched networks (Goyal, Vin, and Chen 1997), an efficient fair queuing algorithm for packet-switched networks (Stiliadis and Varma 1998), three-level approaches for differentiated services in measuring Web quality of service (Lee and Park 2009), and the joint optimisation of encoded-chunk placement and scheduling policy in erasure-coded storage systems with arbitrary service-time distributions and multiple heterogeneous files (Xiang et al. 2014, 2016). The latter uses probabilistic scheduling and optimises the placement and access of contents to minimise access latency in distributed storage systems. This idea is extended in this article to physical warehouse systems, where we use a probabilistic strategy for order pickup. Differentiated services have been considered in many areas, including wireless networks (Chen and Mohapatra 1999; Veres et al. 2001; Le, Hossain, and Alfa 2006), cloud computing (Grit and Chase 2008; Li 2009; Rao et al. 2013), distributed scheduling (Jin, Chase, and Kaur 2004; Shue, Freedman, and Shaikh 2012; Aggarwal et al. 2017; Xiang, Lan, et al. 2017), machine scheduling (Lenstra, Kan, and Brucker 1977; Weng, Lu, and Ren 2001; Murray, Chao, and Khuller 2016), supply chain (Morash and Clinton 1998; Hilletofth 2009), and smart grids (Deshpande, Kim, and Thottan 2011; Bitar and Low 2012; Negrete-Pincetic and Meyn 2012). Differentiated services have been studied even dating back to the 1950s (Smith 1956).
Proactive flow control using adaptive beam forming for smart intra-layer data communication in wireless network on chip
Published in Automatika, 2023
Dinesh Kumar T.R., Karthikeyan A.
The packets are divided into flits, which are smaller pieces. The header flit is the initial flit of a packet, and it contains control information for packet delivery such as the source address, destination address, operation type, packet type, role, priority, and payload size. The path flit is the second flit of the packet, and it provides the path information along with the packet's sequential number in the current transaction between the source and destination. The body flit, also known as the payload, is the third flit of the packet and contains the actual data to be transferred to the destination. The packet format, as shown in Figure 2, is made up of nine fields, including the Source (Sc), which is the communication's initiator. The destination core address is indicated by DC. The type of transaction (read, write, conditional write, broadcast, etc.) is indicated by the Operation (Op). The type of information being exchanged (such as data, instructions, or signal types) is indicated by Type. The source component's role (e.g. user, root, etc.) is indicated by Role. The priority of traffic is classified by Priority. The number of bytes in the payload is indicated by Size. Payload denotes the actual data information created by the source core, whereas Path denotes the packet's registered path.

PF-SDC assigns a proportional weight to each data flow to suit service needs. This is accomplished by setting the priority field in the header flit to 0 or 1. Normal or Low Priority (LP) data traffic is coded as 0, while emergency or High Priority (HP) data traffic is coded as 1. Emergency traffic needs superior service. As a result, priority in procuring network resources is usually given to emergency or real-time traffic for rapid and reliable transmission. The most extensively used packet scheduling algorithms are Weighted Fair Queuing (WFQ) [29], Weighted Round Robin (WRR) [30], and Strict Priority (SP) [28].
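The header-flit fields above can be modeled as a simple record. The field widths and Python names below are illustrative assumptions; only the field list (Sc, DC, Op, Type, Role, Priority, Size) and the 0/1 priority coding come from the text.

```python
from dataclasses import dataclass

@dataclass
class HeaderFlit:
    """Illustrative model of the nine-field header described in the text."""
    source: int        # Sc: core that initiates the communication
    destination: int   # DC: destination core address
    operation: int     # Op: read / write / conditional write / broadcast ...
    msg_type: int      # Type: data, instruction, or signal
    role: int          # Role: user, root, etc.
    priority: int      # 0 = normal / Low Priority (LP), 1 = emergency / High Priority (HP)
    size: int          # Size: payload length in bytes

    def is_high_priority(self) -> bool:
        # HP traffic (priority == 1) is favored when procuring network resources.
        return self.priority == 1
```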
A strict scheduling discipline with a probabilistic priority (SPP) queuing mechanism is used in the proposed model to cope with the starvation problem and make the priority discipline adjustable. SPP distinguishes itself from conventional scheduling algorithms by being "simple to develop and configure, effectively utilizing existing bandwidth and requiring very little memory and processing power." SPP assigns each queue a parameter (high or low priority) that governs how likely that queue is to be served, and thereby offers different levels of service based on traffic.
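The selection step of such a probabilistic priority mechanism can be sketched as follows. This is a minimal two-queue illustration under assumed parameters, not the paper's exact SPP formulation: with probability `p_high` the high-priority queue is served when non-empty, otherwise the low-priority queue gets a turn, which bounds starvation of LP traffic.

```python
import random
from collections import deque

def spp_select(queues, p_high):
    """Pick the next packet under a probabilistic-priority discipline.

    queues : {"high": deque, "low": deque} of waiting packets
    p_high : probability of serving the high-priority queue when both
             queues are non-empty (p_high = 1.0 degenerates to strict
             priority; lower values give LP traffic guaranteed turns)
    """
    hp, lp = queues["high"], queues["low"]
    if hp and (not lp or random.random() < p_high):
        return hp.popleft()
    if lp:
        return lp.popleft()
    return None  # both queues empty
```

Tuning `p_high` is what makes the priority discipline "adjustable": the operator trades HP latency against the LP service rate with a single parameter per queue.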