Hybrid Models for the Control and Optimization of Manufacturing Systems
Published in Javier Campos, Carla Seatzu, Xiaolan Xie, in Manufacturing, 2018
Christos G. Cassandras, Chen Yao
Previous work on this model has focused on deriving relationships between the mean system time and lot sizes. Closed-form results can be obtained only for a simple Markovian queueing model [24] or a single M/G/1 queue [18]. For multiclass systems, one must resort to approximation methods to capture this relationship, as in [20,23]. These approximations carry no quality guarantees and are mainly used for qualitative analysis and gaining insights, rather than for formally controlling and optimizing the system. An alternative approach based on estimating performance sensitivities and using a ‘surrogate’ optimization process was also proposed in [13]. It is worth noting that lot sizing provides an opportunity to optimize system performance by controlling simple parameters (the lot sizes) and is thus an inexpensive alternative to complex manufacturing re-engineering processes. Moreover, by periodically adjusting the parameter values, one can react to random effects in the environment. The problem is also interesting in that it may be viewed from the perspective of either the system, which seeks to optimize the overall average system time while maintaining high server utilization, or the individual user (the ith class), which seeks to optimize its own average system time. This gives rise to a resource contention game, which often exhibits a gap between the system-centric and user-centric optimal solutions. This gap is commonly referred to as the ‘price of anarchy’.
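For concreteness, the gap mentioned at the end of this excerpt can be stated as a ratio. The notation below is illustrative rather than the chapter's own: x denotes a lot-size profile, S̄(x) the resulting overall mean system time, and NE the set of Nash equilibria of the game in which each class chooses its own lot size to minimize its own mean system time.
\[
\mathrm{PoA} \;=\; \frac{\max_{x \in \mathcal{NE}} \bar{S}(x)}{\min_{x} \bar{S}(x)} \;\ge\; 1,
\]
so a value of 1 means that selfish, user-centric lot sizing loses nothing relative to the system-centric optimum, and larger values quantify the efficiency loss.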
Rip-Up and Reroute
Published in Charles J. Alpert, Dinesh P. Mehta, Sachin S. Sapatnekar, Handbook of Algorithms for Physical Design Automation, 2008
Progressive routing can be viewed as a noncooperative game among the different nets: each net is routed greedily to minimize its own latency. Progressive rerouting thus bears a remarkable similarity to the traffic assignment problem, and recent results in traffic assignment theory shed some light on the efficacy of Nair’s technique. The ratio of the cost of a Nash equilibrium to the overall minimum cost is called the price of anarchy. Roughgarden and Tardos (2002) showed that for single-commodity flows whose latency functions are linear in the congestion, any Nash equilibrium has total latency at most 4/3 times the minimum possible total latency. Global routing, on the other hand, is a multicommodity flow in which the commodities cannot be split; for general multicommodity flows, the price of anarchy can be exponential in the polynomial degree of the latency function (Lin et al. 2005).
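The 4/3 bound for linear latencies is attained by the classic Pigou network; the short Python sketch below is not taken from the handbook (the function name and grid-search approach are illustrative) and simply compares the selfish and optimal routings for that two-edge instance.

# Pigou's two-link network: a unit flow may use edge A with constant latency 1
# or edge B with latency equal to its own load x.
# Selfish (Nash) routing puts all flow on B; the optimum splits the flow.

def total_latency(x_on_b: float) -> float:
    """Total latency when a fraction x_on_b of the unit flow uses edge B."""
    x_on_a = 1.0 - x_on_b
    return x_on_a * 1.0 + x_on_b * x_on_b  # edge A: latency 1, edge B: latency x

# Nash equilibrium: edge B is never slower than edge A (x <= 1), so all flow takes B.
nash_cost = total_latency(1.0)  # = 1.0

# Social optimum: minimise the total latency over all splits (fine grid search).
opt_cost = min(total_latency(i / 10_000) for i in range(10_001))  # = 0.75 at x = 0.5

print(f"price of anarchy = {nash_cost / opt_cost:.4f}")  # 1.3333 = 4/3

The ratio is exactly 4/3, matching the linear-latency bound quoted above; with higher-degree polynomial latencies the same construction yields much larger ratios, consistent with the dependence on the polynomial degree cited for general multicommodity flows.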
Introduction
Published in Joseph Y.-T. Leung, Handbook of SCHEDULING, 2004
It is well known that Nash equilibria do not always optimize the overall performance of the system and can exhibit suboptimal network performance. A classical example is the so-called “Prisoner’s Dilemma” (see, e.g., [4,5]), a two-player game in which the cooperative solution is significantly better for both players than the Nash equilibrium reached when each player independently follows its own optimal strategy. In order to understand this phenomenon in noncooperative systems, Koutsoupias and Papadimitriou [3] initiated investigations of the coordination ratio, which is the ratio between the cost of the worst possible Nash equilibrium and that of the social (i.e., overall) optimum. In other words, this analysis seeks the price of uncoordinated selfish decisions (“the price of anarchy”). Koutsoupias and Papadimitriou [3,7] proposed to investigate the coordination ratio for routing problems in which a set of agents sends traffic along a set of parallel links with linear cost functions. This model has later been extended to more realistic settings, most notably to more general cost functions (including, in particular, queueing functions) and to arbitrary underlying networks. This research also initiated the study of the properties of Nash equilibria versus those of the social optimum: how to modify Nash equilibria to improve their cost, how to modify the underlying networks to ensure that the coordination ratio is low, and how to converge to “good” Nash equilibria.
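As a small worked instance of the parallel-links model just described (a standard textbook example, not reproduced from this chapter), consider two unit-weight tasks and two identical links. In the fully mixed Nash equilibrium each task chooses each link with probability 1/2, so the expected maximum load (makespan) is
\[
\mathbb{E}[\text{makespan}] \;=\; \tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot 2 \;=\; \tfrac{3}{2},
\]
while the social optimum places one task on each link for a makespan of 1, giving a coordination ratio of 3/2 even in this smallest nontrivial instance.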
An iterative combinatorial auction mechanism for multi-agent parallel machine scheduling
Published in International Journal of Production Research, 2022
Yaqiong Liu, Shudong Sun, Xi Vincent Wang, Lihui Wang
This section presents a computational study evaluating the performance of the proposed ICA approach. In terms of social welfare, we compare the ICA approach with a state-of-the-art decentralised negotiation mechanism inspired by simulated annealing (NMSA) proposed by Lang, Fink, and Brandt (2016) and with two centralised approaches: a genetic algorithm (GA) (Chaudhry and Elbadawi 2017) and particle swarm optimisation (PSO) (Fang and Lin 2013). Relative to NMSA, the relative growth rate (RGR) measures the improvement in social welfare achieved by ICA. In parallel, we compute the ratio of the social welfare (RSW) of ICA to that of the centralised approaches. The latter is analogous to the price of anarchy, which measures to what extent the self-interest of the agents deteriorates the solutions compared to those determined by a central authority with complete information (Roughgarden and Tardos 2007). The performance of the ICA is positively correlated with RGR and RSW. They are calculated by formulas (5) and (6):
\[
\mathrm{RGR} = \frac{SW_{\mathrm{ICA}} - SW_{\mathrm{NMSA}}}{SW_{\mathrm{NMSA}}}, \qquad (5)
\]
\[
\mathrm{RSW} = \frac{SW_{\mathrm{ICA}}}{SW_{\mathrm{CEN}}}, \qquad (6)
\]
where SW_ICA and SW_NMSA represent the social welfare obtained by the proposed ICA approach and by NMSA, respectively, and SW_CEN is the better of the solutions generated by GA and PSO for the same problem instance based on the centralised model in Appendix 1, since the optimal solution should be used as the benchmark.
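A minimal sketch of how the two metrics can be computed, using the notation reconstructed above; the function names and numerical values are illustrative only and are not taken from the paper.

def relative_growth_rate(sw_ica: float, sw_nmsa: float) -> float:
    """RGR: relative improvement of ICA's social welfare over NMSA's."""
    return (sw_ica - sw_nmsa) / sw_nmsa

def ratio_of_social_welfare(sw_ica: float, sw_ga: float, sw_pso: float) -> float:
    """RSW: ICA's social welfare over the better of the centralised benchmarks."""
    return sw_ica / max(sw_ga, sw_pso)

# Illustrative numbers only (not results from the paper).
print(relative_growth_rate(sw_ica=940.0, sw_nmsa=900.0))                   # ~0.044
print(ratio_of_social_welfare(sw_ica=940.0, sw_ga=980.0, sw_pso=1000.0))   # 0.94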
A reward response game in the blockchain-powered federated learning system
Published in International Journal of Parallel, Emergent and Distributed Systems, 2022
Consider that the federated learning system operates under centralised control, i.e. all devices follow the server's instructions to train their local models. The objective of the whole system is then to maximise the social welfare, denoted by W, i.e. the difference between the global model accuracy and the training cost over all devices, as given in Equation (20). Centralised control achieves better performance than decentralised (game-theoretic) control solutions in terms of meeting this social objective. The price of anarchy is used to quantify the loss of efficiency, caused by the devices' selfish behaviour, of decentralised game solutions compared to the optimal centralised control. In the following, we prove that the non-cooperative game played among all devices is a valid monotone utility game. As a result, we obtain a lower bound of 0.5 on the PoA.
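In compact form, and following the excerpt's convention of defining the PoA as the welfare of an equilibrium relative to the optimum (the notation below is illustrative, not the paper's), the valid monotone utility game property yields
\[
W(s^{\mathrm{NE}}) \;\ge\; \tfrac{1}{2}\, W(s^{\ast}) \quad \text{for every Nash equilibrium } s^{\mathrm{NE}},
\qquad\text{hence}\qquad
\mathrm{PoA} \;=\; \min_{s^{\mathrm{NE}}} \frac{W(s^{\mathrm{NE}})}{W(s^{\ast})} \;\ge\; \tfrac{1}{2},
\]
where s* is the welfare-maximising (centralised) strategy profile; in other words, selfish devices still secure at least half of the social welfare achievable under centralised control.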
Effect of reducing the price of anarchy on fairness in highway resource allocation for individual users
Published in Transportmetrica B: Transport Dynamics, 2019
Much research has been devoted to developing various control strategies to bridge the gap between user equilibrium and system optimum. Koutsoupias and Papadimitriou (1999) were the first to quantify the discrepancy between the two, an efficiency loss due to the selfish behavior of individual agents that has since been termed the price of anarchy. Roughgarden and Tardos (2002) considered selfish routing in networks and proved that with linear latency functions the ratio between the user equilibrium and the system optimum is at most 4/3, while for general latency functions the price of anarchy can be ‘arbitrarily’ large relative to the system optimum. Even the linear-latency bound is practically significant: eliminating an equilibrium cost that is 33% above the system optimum would yield substantial savings for a transportation network.
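To make the 33% figure precise under the linear-latency bound (a simple arithmetic consequence, not a derivation from the paper):
\[
C_{\mathrm{UE}} \;\le\; \tfrac{4}{3}\, C_{\mathrm{SO}}
\;\;\Longrightarrow\;\;
\frac{C_{\mathrm{UE}} - C_{\mathrm{SO}}}{C_{\mathrm{SO}}} \;\le\; \tfrac{1}{3} \approx 33\%,
\qquad
\frac{C_{\mathrm{UE}} - C_{\mathrm{SO}}}{C_{\mathrm{UE}}} \;\le\; \tfrac{1}{4} = 25\%,
\]
so closing the gap saves at most one third of the optimal system cost, or equivalently at most one quarter of the equilibrium cost.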