Goal Programming
Published in Albert G. Holzman, Mathematical Programming, 2020
In goal programming, instead of attempting to maximize or minimize the objective criterion directly, as in linear programming, the deviations between goals and what can be achieved within the given set of constraints are minimized. In the simplex algorithm of linear programming such deviations are called slack variables. These variables take on a new significance in goal programming. Each deviational variable is represented in two dimensions: a positive and a negative deviation from each subgoal or goal. The objective function then becomes the minimization of these deviations, based on the relative importance or priority assigned to them.
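As a concrete illustration, the deviation-based objective can be sketched in Python. The goal data here, two linear goals over two decision variables with illustrative weights, is hypothetical and only shows the structure of the formulation:

```python
def goal_deviations(achieved, target):
    """Split the gap between an achieved value and its goal into
    non-negative under- and over-achievement deviations."""
    d_minus = max(0.0, target - achieved)   # under-achievement (negative deviation)
    d_plus = max(0.0, achieved - target)    # over-achievement (positive deviation)
    return d_minus, d_plus

def weighted_deviation_objective(x, goals):
    """Goal-programming objective: weighted sum of all deviations.

    `goals` is a list of (coefficients, target, weight) triples,
    one per goal; each goal is a linear expression over x."""
    total = 0.0
    for coeffs, target, weight in goals:
        achieved = sum(c * xi for c, xi in zip(coeffs, x))
        d_minus, d_plus = goal_deviations(achieved, target)
        total += weight * (d_minus + d_plus)
    return total

# Two hypothetical goals over x = (x1, x2): x1 + x2 should be 10 (weight 2),
# and 2*x1 + x2 should be 12 (weight 1).
goals = [([1, 1], 10, 2), ([2, 1], 12, 1)]
print(weighted_deviation_objective([2, 8], goals))  # both goals met exactly -> 0.0
print(weighted_deviation_objective([3, 8], goals))  # over by 1 and 2 -> 2*1 + 1*2 = 4.0
```

In a full goal program these deviational variables would be added to the constraint set and the weighted sum minimized by linear programming; the sketch above only evaluates the objective for a candidate solution.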
The Simplex Method
Published in Shashi Kant Mishra, Bhagwat Ram, Introduction to Linear Programming with MATLAB®, 2017
The aim of the simplex algorithm is to move from one basic feasible solution to another, decreasing the value of the objective function at each step, until a minimum is reached and an optimal basic feasible solution is found. A basic feasible solution is optimal if and only if the corresponding reduced cost coefficients are all non-negative.
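The optimality test can be sketched in plain Python for a hypothetical two-constraint problem in standard form (minimize c^T x subject to Ax = b, x >= 0): compute the simplex multipliers y from the basis, then check that every nonbasic reduced cost c_j - y·a_j is non-negative. The specific problem below is illustrative, not from the text:

```python
# Hypothetical problem:  minimize -x1 - 2*x2
#   s.t.  x1 + x2 + s1 = 4
#              x2 + s2 = 3,   all variables >= 0,
# with current basis {x1, x2} (so x1 = 1, x2 = 3).

def solve_2x2(M, rhs):
    """Solve a 2x2 linear system M y = rhs by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    y0 = (rhs[0] * M[1][1] - M[0][1] * rhs[1]) / det
    y1 = (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / det
    return [y0, y1]

B = [[1, 1], [0, 1]]          # basic columns (x1, x2)
c_B = [-1, -2]                # costs of the basic variables
B_T = [[B[j][i] for j in range(2)] for i in range(2)]
y = solve_2x2(B_T, c_B)       # simplex multipliers: solve B^T y = c_B

nonbasic = {"s1": ([1, 0], 0), "s2": ([0, 1], 0)}  # name -> (column, cost)
reduced = {name: c - sum(yi * ai for yi, ai in zip(y, a))
           for name, (a, c) in nonbasic.items()}
print(reduced)                                # {'s1': 1.0, 's2': 1.0}
print(all(r >= 0 for r in reduced.values()))  # True -> this basis is optimal
```

Since both reduced costs are non-negative, no adjacent basic feasible solution can decrease the objective, which is exactly the optimality criterion stated above.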
Introduction to Algorithms and Data Structures
Published in Sriraman Sridharan, R. Balakrishnan, Foundations of Discrete Mathematics with Algorithms and Programming, 2019
The simplex algorithm for solving a linear programming problem takes exponential time in the worst case (the Klee-Minty examples), but it behaves very well in practice; that is, it works as if it were an efficient algorithm. On average, the simplex method takes linear time (see [9]). Further, an exponential-time algorithm may be useful if the size of the problem is bounded above by a suitable constant, that is, if $n$ is sufficiently small. To illustrate this assertion, consider two algorithms of time complexity $n^3$ and $2^n$ for solving the same problem. Since $2^n \le n^3$ for $2 \le n \le 9$, we may use the exponential-time algorithm to solve problems of size less than or equal to 9. Here we have ignored the constants of proportionality. If we have two algorithms of complexity $1000n^2$ and $2n^3$ for solving the same problem, then the $2n^3$ algorithm will be cheaper for all problems of size less than 500.
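Both crossover points can be verified with a short computation:

```python
# Where does 2^n stop being at most n^3?
small = [n for n in range(1, 20) if 2**n <= n**3]
print(small)        # [2, 3, 4, 5, 6, 7, 8, 9]  (2^9 = 512 <= 729 = 9^3, but 2^10 = 1024 > 1000)

# 2n^3 < 1000n^2  <=>  n < 500, so the cubic algorithm is cheaper up to n = 499.
crossover = min(n for n in range(1, 1000) if 2 * n**3 >= 1000 * n**2)
print(crossover)    # 500
```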
Linear Programming from Fibonacci to Farkas
Published in Annals of Science, 2021
One of the founders of OR was George Dantzig. During the Second World War he had worked on various operational problems in an ad hoc manner, usually in situations where there were many variables and any solution was acceptable. Subsequently he tried to deal with such problems in a more structured way, encouraged by the achievements of Wassily Leontief, who had obtained significant results on equilibrium in linear models of an economy in which there are many inter-dependent industries. Dantzig realized that the next step was to set up a model in which a linear function was optimized, subject to a set of linear constraints, and by 1947 he had begun to develop a general method for solving such problems. This topic eventually became known as Linear Programming (LP), and Dantzig’s method became the ‘simplex algorithm’.
Improving a primal–dual simplex-type algorithm using interior point methods
Published in Optimization, 2018
T. Glavelis, N. Ploskas, N. Samaras
Linear Programming (LP) is a significant research area in the field of operations research. Linear programs arise in almost every type of scientific and engineering application. The first approach for solving linear programming problems (LPs) came from George B. Dantzig, who set out the fundamental principles. Dantzig proposed the simplex algorithm [1], which starts from a basic feasible solution and moves from one basic feasible solution to an adjacent one until an optimal solution is found. The vast literature of operations research contains an extensive family of simplex-type algorithms. Although the simplex algorithm remains widely used today, there are many real-life applications in which its performance degrades due to a phenomenon called degeneracy: simplex-type algorithms may stall at a degenerate vertex for many iterations before moving to another vertex. Degeneracy and redundancy are very common in real-life applications, and consequently many different approaches have been presented to overcome degeneracy [2]. Over the last decades, researchers have focused on: (i) reducing the number of iterations that the simplex algorithm performs [3–5] (for a review, see [6]), (ii) reducing the computational work involved in each iteration [7–10] (for a review, see [11]), and (iii) new variants of the simplex algorithm [12–15].