NIC-Based Parallelism
Published in Heqing Zhu, Data Plane Development Kit (DPDK), 2020
Jingjing Wu, Xiaolong Ye, Heqing Zhu
The RSS configuration distributes packets evenly across the configured receive queues. Queue 3 receives only a specific UDP data flow; this is done by the Flow Director rule configured in the previous step, which assigns the specified UDP packets to Queue 3. RSS therefore needs to be reconfigured; otherwise, RSS will balance the remaining, non-matching UDP traffic across all queues, including Queue 3. The step below removes Queue 3 from the RSS redirection table (RETA).

// Configure the hash value to queue number mapping table (RETA).
// The table size is 128 entries for the 82599 NIC, i.e. two groups
// of RTE_RETA_GROUP_SIZE (64) entries each.
struct rte_eth_rss_reta_entry64 reta_conf[2];
int idx, i, j = 0;
for (idx = 0; idx < 2; idx++) {
    reta_conf[idx].mask = ~0ULL;
    for (i = 0; i < RTE_RETA_GROUP_SIZE; i++, j++) {
        if (j == 3)                  /* cycle over queues 0-2 only,  */
            j = 0;                   /* keeping Queue 3 out of RSS   */
        reta_conf[idx].reta[i] = j;
    }
}
rte_eth_dev_rss_reta_update(port, reta_conf, 128);
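If desired, the table the NIC actually holds can be read back with rte_eth_dev_rss_reta_query() to confirm that Queue 3 no longer appears. The short sketch below continues the snippet above (the port variable and the usual headers are assumed from the surrounding example):

struct rte_eth_rss_reta_entry64 check_conf[2] = {
    { .mask = ~0ULL },   /* request all 64 entries of group 0 */
    { .mask = ~0ULL },   /* request all 64 entries of group 1 */
};
if (rte_eth_dev_rss_reta_query(port, check_conf, 128) == 0) {
    for (int g = 0; g < 2; g++)
        for (int e = 0; e < RTE_RETA_GROUP_SIZE; e++)
            if (check_conf[g].reta[e] == 3)
                printf("Queue 3 still present at RETA index %d\n",
                       g * RTE_RETA_GROUP_SIZE + e);
}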
System Implementation
Published in Robert F. Hodson, Abraham Kandel, Real-Time Expert Systems Computer Architecture, 1991
Robert F. Hodson, Abraham Kandel
As mentioned previously, the priority data associated with each queue is stored in a priority FIFO. The ordering of queue numbers in the FIFO is from high to low priority. When the AP requests data from the APQ, the highest-priority queue is tested for data by addressing its empty flag with the first element in the FIFO. If the queue is not empty, an element is removed from the associated queue and loaded into the AP interface. If the queue is empty, the FIFO is cycled to access the queue with the next highest priority. This process continues until a non-empty queue is found. This method ensures that the queues are always tested in priority order. After the data is sent to the AP, the priority FIFO is cycled until the highest-priority queue number is back at the front of the FIFO.
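A small software model may make the selection order concrete. The C sketch below is only an illustration of the scheme described above (the structure names, sizes, and function names are invented): it walks the priority-ordered list of queue numbers, tests each queue's empty flag in turn, and removes one element from the first non-empty queue.

#include <stdbool.h>

#define NUM_QUEUES  4
#define QUEUE_DEPTH 16

/* One data queue plus the state behind its empty flag. */
struct data_queue {
    int items[QUEUE_DEPTH];
    int head, count;
};

static bool queue_empty(const struct data_queue *q) { return q->count == 0; }

static int queue_remove(struct data_queue *q)
{
    int v = q->items[q->head];
    q->head = (q->head + 1) % QUEUE_DEPTH;
    q->count--;
    return v;
}

/*
 * priority_fifo[] holds queue numbers ordered from high to low priority,
 * with index 0 at the front.  The loop models cycling the FIFO: each
 * queue's empty flag is tested in priority order, and one element is
 * removed from the first non-empty queue.  In the hardware version the
 * FIFO is then cycled further until the highest-priority number is back
 * at the front; with an indexed array that final rotation is implicit.
 */
static int apq_fetch(struct data_queue queues[NUM_QUEUES],
                     const int priority_fifo[NUM_QUEUES], int *out)
{
    for (int pos = 0; pos < NUM_QUEUES; pos++) {
        int qnum = priority_fifo[pos];
        if (!queue_empty(&queues[qnum])) {
            *out = queue_remove(&queues[qnum]);
            return qnum;           /* queue number that supplied the data */
        }
    }
    return -1;                     /* every queue was empty */
}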
Other Techniques Essential for Modern Reliability Management: II
Published in Edgar Bradley, Reliability Engineering, 2016
Then L = average number of units in the system, that is, the number in the queue plus the number being served: L = λ/(μ − λ).
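As a hypothetical numerical illustration (the rates here are invented, not taken from the text): with an arrival rate λ = 4 units per hour and a service rate μ = 5 units per hour, L = λ/(μ − λ) = 4/(5 − 4) = 4, i.e. on average four units are in the system, counting those waiting and the one in service.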
An intelligent charging navigation scheme for electric vehicles using a cloud computing platform
Published in Energy Sources, Part A: Recovery, Utilization, and Environmental Effects, 2020
Hanlin Shao, Guofang Zhang, Miao Xia
First, we simulate a large city road network. The area of the road network is assumed to be 900 km², that is, an urban traffic network of 30 km × 30 km, as shown in Figure 2. The network has 23 traffic nodes and 33 road sections. The weight on each road section represents the distance between its end nodes, in km. The distance data can be downloaded from the map and stored on the cloud computing platform. Four charging stations are distributed in the traffic network, at nodes 8, 11, 17, and 20, respectively. The number of charging points and the queue number differ from one charging station to another. Our added value to this routing problem is proposing an approach to find the shortest path in a multimodal transportation network (Idri et al. 2017). In MATLAB, the simulated urban road network in Figure 3 can be expressed programmatically.
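Since the network is described as a weighted graph, one natural programmatic representation is a distance (adjacency) matrix, with an "infinite" weight marking node pairs that share no direct road section; a shortest-path routine such as Dijkstra's algorithm can then select the route to a charging station. The C sketch below is only an illustration with invented node numbers and distances, not the actual 23-node data:

#include <stdio.h>

#define N   5       /* small cut-out; the real network has 23 nodes */
#define INF 1e9     /* marks node pairs with no direct road section */

int main(void)
{
    /* dist[i][j] = length in km of the road section between node i and
     * node j; the values here are invented for illustration only. */
    double dist[N][N] = {
        { 0.0, 4.0, INF, 7.5, INF },
        { 4.0, 0.0, 3.2, INF, 6.1 },
        { INF, 3.2, 0.0, 2.8, INF },
        { 7.5, INF, 2.8, 0.0, 5.0 },
        { INF, 6.1, INF, 5.0, 0.0 },
    };

    /* A shortest-path algorithm (e.g. Dijkstra) would run on dist[][]
     * to pick the cheapest route from the vehicle's current node to
     * each candidate charging-station node. */
    printf("direct road 0-1: %.1f km\n", dist[0][1]);
    return 0;
}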
Inventory allocation in robotic mobile fulfillment systems
Published in IISE Transactions, 2020
Tim Lamballais Tessensohn, Debjit Roy, René B.M. De Koster
The pick and replenishment queues are load-dependent queues, and for both queues it holds that the largest service time happens when one pod is at the queue, since then only one workstation is operating at a time. The service time is denoted by for the pick queue and by for the replenishment queue. As described earlier in Section 3.4, for i > 1, therefore using as the time picking takes provides an upper bound on the picking time needed. gives an upper bound for replenishment. Per time unit and per SKU s, the pod goes to the pick queue θs times, where each time it will spend on average at most time units. This means that per time unit, a pod is, on average, busy with picking at most time. Furthermore, per time unit the pod goes to the replenishment queue number of times, where each time it spends, on average, at most time units. This means that per time unit, a pod will on average be busy with replenishment at most time. Let denote the percentage of a time unit that a pod is used. In a stable system, this all needs to fit within one time unit, therefore the order arrival rates need to be in the set as depicted in Equation (14):
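Because the excerpt's inline symbols were lost in extraction, the relation it describes can only be restated in assumed notation. The sketch below is not the paper's Equation (14), just the stability condition the prose spells out, writing T_pick and T_repl for the single-pod (upper-bound) service times, θ_s and ν_s for the per-time-unit numbers of pick-queue and replenishment-queue visits, and ρ_s for the fraction of a time unit the pod is busy:

\[
\rho_s \;=\; \theta_s \, T_{\text{pick}} \;+\; \nu_s \, T_{\text{repl}} \;\le\; 1 ,
\]

so the order arrival rates must lie in the set for which this inequality holds for every SKU s.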
Optimal scheduling and power allocation in wireless networks with heavy traffic: the infinite time horizon case
Published in International Journal of Systems Science: Operations & Logistics, 2018
Let L(t) be a purely discontinuous process with values in {1,…, J}. We assume that L(t) admits a known stationary (or steady-state) probability distribution Π = {π1,…, πJ}; that is, L(t) converges weakly to Π as t → ∞ independently of its initial distribution. The process L(t) models a multidimensional channel connecting a single base station transmitter to a fixed number K of mobile users in the downlink configuration (see Figure 1). Thus any value j from the state space {1,…, J} of the process L(t) determines the gains of all the channels of the system. Besides the channel process, there are stochastic data processes arriving at the base station from the sources (the users) for the destinations (the mobiles). Data is measured in bits; it arrives at the base station and is queued until transmitted. A base station queue is assigned to each source–destination pair, and qk(t) denotes the kth queue length, k ∈ {1,…, K}. The expected rate of transmission from queue k, given that the channel is in the jth state, is given by the capacity formula; it may be written in compact form as a function, labelled by the queue number k, of 2K arguments: the time-sharing factors and the powers. The operating time horizon is divided into time slots, and a fixed amount of power is available in each time slot. δk(qk(t), j) and Pk(qk(t), j) denote, respectively, the fraction of the time slot and the power allocated to queue k. These resources are allocated in accordance with the queue length and the channel state, and in such a way that the expectation of the cost – the aggregation of the instantaneous total queue length over the time horizon – is minimised. This is equivalent to minimising the expected total delay for the users.
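For orientation only, the "capacity formula" mentioned above is typically a Shannon-type rate with time sharing; the form below is a generic sketch with assumed symbols (g_{k,j} for the gain of user k's channel in state j), not necessarily the exact expression used in the paper:

\[
r_k\bigl(q_k(t), j\bigr) \;=\; \delta_k\bigl(q_k(t), j\bigr)\,
\log\!\left( 1 + \frac{g_{k,j}\, P_k\bigl(q_k(t), j\bigr)}{\delta_k\bigl(q_k(t), j\bigr)} \right),
\qquad k \in \{1,\dots,K\},
\]

so that, through the K time-sharing fractions and the K powers, the rate of each queue is indeed a function of 2K arguments labelled by the queue number k.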