Unification of Artificial Intelligence with Optimization
Published in Jay Liebowitz, The Handbook of Applied Expert Systems, 2019
Usually, a large-scale IP model is not easy to solve, so developing effective solution algorithms is an important issue in solving IP problems. One of the most popular approaches to solving IP models is Lagrangian relaxation (Fisher, 1981). Lagrangian relaxation relaxes the constraints that make the model too complex to solve, and the relaxed problem yields a tight bound against which the performance of heuristic algorithms can be evaluated. UNIK-RELAX identifies the structural characteristics of the IP model to determine which constraints should be relaxed to generate the Lagrangian model. For this identification, the relationships among the embedded structures, constraints, and blocks of terms are identified as depicted in Figure 6 (Kim and Lee, 1996).
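As a generic illustration of this idea (a sketch for exposition, not the UNIK-RELAX formulation itself), suppose the IP splits into a complicating constraint block $Ax \le b$ and an easy block $Dx \le d$:

$$ z_{IP} = \max\{\, c^{\top}x \;:\; Ax \le b,\; Dx \le d,\; x \in \mathbb{Z}^{n}_{+} \,\}. $$

Relaxing the complicating block with multipliers $\lambda \ge 0$ gives the Lagrangian subproblem

$$ z_{LR}(\lambda) = \max\{\, c^{\top}x + \lambda^{\top}(b - Ax) \;:\; Dx \le d,\; x \in \mathbb{Z}^{n}_{+} \,\}, $$

which satisfies $z_{LR}(\lambda) \ge z_{IP}$ for every $\lambda \ge 0$; the bound used to evaluate heuristics is then $\min_{\lambda \ge 0} z_{LR}(\lambda)$.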
Integer Programming
Published in Michael W. Carter, Camille C. Price, Ghaith Rabadi, Operations Research, 2018
A number of other approaches have been developed and refined over the years. Cutting plane and cover inequality methods repeatedly introduce new constraints into integer problems to exclude non-integer extreme points from the feasible region, and then use simple LP solutions to locate the optimum, which then occurs at an integer point. Lagrangian relaxation incorporates constraints into the objective function by placing a penalty on any violated constraint, so that any solution violating a constraint has a worse objective value than a solution with no constraint violation; the penalties must be chosen appropriately for the given problem. The technique of column generation is applicable to problems such as vehicle routing and workforce scheduling, in which customers or workers must be assigned to trucks or work patterns; incomplete initial solutions are iteratively built up into complete optimal solutions.
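A small constructed example of a cover inequality (chosen here for illustration, not taken from the chapter): for the binary knapsack constraint $3x_1 + 4x_2 + 5x_3 \le 6$, the index set $\{1,2\}$ is a cover because $3 + 4 > 6$, so every feasible integer point satisfies $x_1 + x_2 \le 1$. Adding this inequality cuts off fractional LP points such as $(1, 0.75, 0)$ while excluding no integer solution.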
Strategic bidding for a price-maker hydroelectric producer: Stochastic dual dynamic programming and Lagrangian relaxation
Published in IISE Transactions, 2018
Gregory Steeger, Timo Lohmann, Steffen Rebennack
Lagrangian relaxation is typically used to make difficult problems tractable by moving complicating constraints from the constraint set into the objective function, thus creating a relaxation of the original problem. In many cases, Lagrangian relaxation is used to solve large problems by decomposing them into a number of solvable independent subproblems. In stochastic programming, a well-established method is the application of Lagrangian relaxation through dualization of the nonanticipativity constraints; in combination with a branch-and-bound scheme, an exact method can be obtained for stochastic integer programming problems (Carøe and Schultz, 1999). However, since Lagrangian relaxation solves a relaxed problem, the solution it provides may not be optimal for the original problem. Lagrangian relaxation is only guaranteed to provide bounds on the original problem (Geoffrion, 1974). The tightest bound one can find through Lagrangian relaxation is achieved by minimizing (when the original problem is a maximization problem) the resulting Lagrangian objective function. Unfortunately, this minimization problem is seldom easy to solve. Instead of explicitly solving it to obtain the optimal set of Lagrangian multipliers, practitioners use subgradient methods (Polyak, 1969; Nowak and Römisch, 2000; Wang, 2009), surrogate subgradient methods (Zhao et al., 1999), bundle methods (Kaskavelis and Caramanis, 1998), or progressive hedging with a Frank–Wolfe method (Boland et al., 2017).
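To make the subgradient idea concrete, here is a minimal sketch of a projected subgradient method for the Lagrangian dual of a small binary maximization problem; the data, the 1/k step size, and the function name are illustrative assumptions, not the authors' model:

```python
import numpy as np

# Illustrative data (assumed for this sketch):  max c.x  s.t.  a.x <= b,  x in {0,1}^n.
# The single "complicating" constraint a.x <= b is dualized with a multiplier lam >= 0.
c = np.array([10.0, 7.0, 4.0, 3.0])
a = np.array([6.0, 5.0, 3.0, 2.0])
b = 9.0

def solve_relaxation(lam):
    """Maximize c.x + lam*(b - a.x) over x in {0,1}^n; separable, so solved by inspection."""
    x = (c - lam * a > 0).astype(float)
    return x, c @ x + lam * (b - a @ x)

lam, best_bound = 0.0, np.inf
for k in range(1, 101):
    x, bound = solve_relaxation(lam)          # Lagrangian subproblem: gives an upper bound
    best_bound = min(best_bound, bound)       # keep the tightest bound seen so far
    subgrad = b - a @ x                       # subgradient of the dual function at lam
    lam = max(0.0, lam - subgrad / k)         # projected subgradient step with diminishing 1/k
print(f"best Lagrangian bound: {best_bound:.3f}, final multiplier: {lam:.3f}")
```

Each evaluation of the dual function is an upper bound on the maximization problem, so the loop simply records the tightest one; as the excerpt notes, this bound, rather than an optimal primal solution, is all that Lagrangian relaxation guarantees.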
Lagrangian relaxation method for optimizing delay of multiple autonomous guided vehicles
Published in Transportation Letters, 2018
One heuristic approach for handling optimization problems is Lagrangian relaxation, which is efficient in obtaining a lower bound for the problem. An optimization problem aims to minimize or maximize one or more objective functions subject to constraints, and the constraints directly shape the feasible solution space. The Lagrangian relaxation approach relaxes one or more constraints, multiplies them by coefficients, and places them in the objective function (as a penalty function) to extend the feasible solution space and increase the possibility of obtaining optimal solutions. Therefore, some constraints must be identified as hard constraints to be removed from the initial mathematical model. Among the constraints proposed in 'Mathematical notations', constraints 5 and 6 are identified as hard, since they are far more numerous than the others and make it difficult for the optimization algorithm to obtain optimal solutions. These two constraints are added to the objective function with Lagrangian multipliers as penalties, so that the hard constraints in the mathematical model are relaxed.
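In generic terms (constraints 5 and 6 themselves are defined in 'Mathematical notations' and are not reproduced here), writing the two hard constraint families as $g(x) \le 0$ and $h(x) \le 0$, the relaxed model replaces $\min f(x)$ subject to $g(x) \le 0$, $h(x) \le 0$, $x \in X$ with the Lagrangian subproblem $\min_{x \in X} f(x) + \lambda^{\top} g(x) + \mu^{\top} h(x)$ for multipliers $\lambda, \mu \ge 0$, so that violations of the relaxed constraints are penalized in the objective rather than forbidden outright.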
A framework for big data pre-processing and search optimization using HMGA-ACO: a hierarchical optimization approach
Published in International Journal of Computers and Applications, 2019
K. V. Rama Satish, N. P. Kavya
Lagrangian relaxation is a common procedure for generating variable bounds during optimization with complex constraints. It is generally accepted that a solution to the relaxed problem is an approximate solution to the original problem; it is therefore safe to claim that a critical point of the relaxed problem is an approximate critical point of the original problem and should provide useful information. The original problem in the Lagrangian relaxation can be expressed as follows:
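The equation itself is not included in this excerpt; in a standard statement (notation assumed here, not the authors'), the original problem is

$$ z^{*} = \min_{x \in X} \; c^{\top}x \quad \text{subject to} \quad Ax \le b, $$

where $Ax \le b$ denotes the complicating constraints; dualizing them with multipliers $\lambda \ge 0$ yields the relaxed problem $L(\lambda) = \min_{x \in X} c^{\top}x + \lambda^{\top}(Ax - b)$, whose optimal value bounds $z^{*}$ from below for every $\lambda \ge 0$.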