Exploiting the Flexibility Value of Virtual Power Plants through Market Participation in Smart Energy Communities
Published in Ehsan Heydarian-Forushani, Hassan Haes Alhelou, Seifeddine Ben Elghali, Virtual Power Plant Solution for Future Smart Energy Communities, 2023
Georgios Skaltsis, Stylianos Zikos, Elpiniki Makri, Christos Timplalexis, Dimosthenis Ioannidis, Dimitrios Tzovaras
Optimization algorithms can be categorized by the nature of the variables (continuous or discrete) or by the nature of the objective and constraints (linear or nonlinear). The resulting main categories are linear programming (LP), mixed-integer linear programming (MILP), nonlinear programming and mixed-integer nonlinear programming. In LP, both the objective function and the constraints on the decision variables are linear. The simplex method, a very common LP algorithm, optimizes a linear function subject to constraints expressed as inequalities; it generates and examines candidate vertex solutions of the feasible region and usually finds the optimal one quickly. Moreover, the column generation method is a technique used to solve MILP problems when the number of variables is large compared with the number of constraints; it is effective because it avoids enumerating all possible elements, unlike traditional MILP algorithms. On the other hand, quadratic programming problems are a simple form of nonlinear problems, in which the objective is a convex or non-convex quadratic function of the variables.
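To make the LP/MILP distinction concrete, the sketch below solves a small, purely hypothetical MILP (linear objective, linear constraints, integer decision variables) with SciPy's milp interface, assumed to be available (SciPy 1.9+); it illustrates the problem class, not the specific algorithms cited in the excerpt.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical MILP: maximize x1 + 2*x2 (written as a minimization)
# subject to linear constraints and integer-valued decision variables.
c = np.array([-1.0, -2.0])                    # minimize -x1 - 2*x2

A = np.array([[1.0, 1.0],                     # x1 + x2   <= 4
              [2.0, 1.0]])                    # 2*x1 + x2 <= 5
constraints = LinearConstraint(A, ub=[4.0, 5.0])

integrality = np.ones_like(c)                 # 1 marks an integer variable
bounds = Bounds(lb=0, ub=np.inf)              # x1, x2 >= 0

res = milp(c=c, constraints=constraints,
           integrality=integrality, bounds=bounds)
print(res.x, -res.fun)                        # optimal integer solution and objective
```

Dropping the `integrality` argument turns the same model into an ordinary LP relaxation, which is exactly the difference between the LP and MILP categories described above.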
Classical Optimization Methods
Published in A Vasuki, Nature-Inspired Optimization Algorithms, 2020
Dynamic programming is a method for solving optimization problems that require decisions to be taken sequentially as one works through the problem. The decisions could be required at different levels of the system. This technique was developed and presented by Richard Bellman [14] in 1954. Physical systems can be modeled as finite state machines in which the set of parameters of the system determines the system state. The system is modeled as moving from one state to another in a time sequence, changing state according to the decision taken at that instant of time. Finally, a function that depends on these parameters attains a value determined by the sequence of decisions taken and hence by the changes of state (transformations of the state variables) of the system. The value attained by the function could be a maximum or a minimum, but it should be optimal. Examples of such systems include the production line in a manufacturing plant or the maintenance of equipment and consumables in a factory.
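The following sketch illustrates this staged decision-making on a hypothetical equipment-maintenance problem of the kind mentioned above (the keep/replace model and all costs are invented for illustration): the optimal value of each state at each stage is computed by backward induction from the later stages.

```python
# Backward-induction dynamic programming for a hypothetical machine
# keep/replace problem: the state is the machine's age, the decision at
# each stage is to keep it (age-dependent operating cost) or replace it
# (fixed cost plus the cost of running a new machine), minimizing total cost.

T = 5                      # number of decision stages
MAX_AGE = 5                # oldest age tracked
REPLACE_COST = 10.0        # hypothetical replacement cost

def operate_cost(age):     # hypothetical age-dependent operating cost
    return 1.0 + 2.0 * age

# value[t][age] = minimal cost from stage t to the end, given machine age
value = [[0.0] * (MAX_AGE + 1) for _ in range(T + 1)]

for t in range(T - 1, -1, -1):                # work backward over stages
    for age in range(MAX_AGE + 1):
        next_age = min(age + 1, MAX_AGE)
        keep = operate_cost(age) + value[t + 1][next_age]
        replace = REPLACE_COST + operate_cost(0) + value[t + 1][1]
        value[t][age] = min(keep, replace)    # Bellman's principle of optimality

print("Minimal total cost starting with a new machine:", value[0][0])
```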
State Estimation
Published in Leonard L. Grigsby, Power System Stability and Control, 2017
Jason G. Lindquist, Danny Julian
Another solution method that addresses the state estimation problem is linear programming. Linear programming is an optimization technique that serves to minimize a linear objective function subject to a set of constraints:

$$\min \; \bar{c}^{T}\bar{x} \quad \text{s.t.} \quad A\bar{x} = \bar{b}, \quad \bar{x} \geq 0$$
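As a minimal sketch of this standard form, assuming SciPy is available, the snippet below minimizes a linear objective subject to equality constraints and non-negativity; the numbers are illustrative and not taken from a state-estimation model.

```python
import numpy as np
from scipy.optimize import linprog

# Standard-form LP: minimize c^T x subject to A x = b, x >= 0.
# The coefficients below are purely illustrative.
c = np.array([1.0, 2.0, 0.0])

A_eq = np.array([[1.0,  1.0, 1.0],    # x1 + x2 + x3 = 4
                 [1.0, -1.0, 0.0]])   # x1 - x2      = 1
b_eq = np.array([4.0, 1.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print(res.x, res.fun)                 # optimal x and objective value
```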
A Novel Heuristic Method for Linear Nearest Neighbour Realization of Reversible Circuits
Published in IETE Journal of Research, 2022
Anirban Bhattacharjee, Chandan Bandyopadhyay, Hafizur Rahaman
This phase is used to find the desired qubit ordering for the given circuit. For this purpose, we solve the graph traversal problem using a recursive optimization algorithm known as dynamic programming [30]. Like the divide-and-conquer principle, it partitions the problem into several smaller sub-problems, but the solutions of these sub-problems are interrelated rather than independent as in divide and conquer. In other words, the dynamic programming approach stores the outcome of each solved sub-problem and retrieves it later when required, so that results can be reused in solving similar sub-problems. The algorithm uses an optimized recursive function in which repeated calls are avoided by memoizing the results, which reduces computational time. Finally, the outcomes of all sub-problems are combined to obtain the best solution for the given problem. Therefore, a dynamic programming algorithm achieves optimization by finding optimal solutions to the sub-problems.
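A generic sketch of the memoized recursion described above (not the authors' qubit-ordering procedure): each sub-problem is solved once, its result is cached, and later recursive calls reuse the stored value. The coin-change problem is used here purely as a stand-in.

```python
from functools import lru_cache

def min_coins(amount, coins=(1, 3, 4)):
    """Minimum number of coins summing to `amount` (illustrative sub-problem)."""

    @lru_cache(maxsize=None)           # memo table: sub-problem -> optimal value
    def solve(remaining):
        if remaining == 0:
            return 0
        best = float("inf")
        for c in coins:
            if c <= remaining:
                # reuse the stored result of the smaller sub-problem
                best = min(best, 1 + solve(remaining - c))
        return best

    return solve(amount)

print(min_coins(10))   # -> 3  (e.g. 4 + 3 + 3)
```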
Objective Functions and Infrastructure for Optimal Placement of Electrical Vehicle Charging Station: A Comprehensive Survey
Published in IETE Journal of Research, 2021
Mohammad Suhail, Iram Akhtar, Sheeraz Kirmani
Linear programming is an optimization technique for a system of linear constraints and a linear objective function. The objective function defines the quantity to be optimized, and the goal of linear programming is to find the values of the variables that maximize or minimize it [46]. This technique can be used to minimize total cost, taking transportation cost, land cost and other costs into account as constraints. It can also be used to maximize energy efficiency, an important parameter when determining the optimal location of a charging station. In this method, at least two linear inequalities are formed from the constraints, the maximum and minimum conditions are checked against all of them, and a graphical representation gives the best operating region for the charging station.
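The sketch below illustrates this two-variable graphical/vertex idea with hypothetical cost coefficients and constraints for siting a charging station: candidate optima lie where two constraint boundaries intersect, and the feasible vertex with the lowest total cost is selected.

```python
import itertools
import numpy as np

# Hypothetical objective: total cost = 3*x1 + 2*x2 (to minimize),
# where x1, x2 are illustrative siting decision variables.
c = np.array([3.0, 2.0])

# Constraints in the form A @ x <= b (including non-negativity).
A = np.array([[-1.0, -1.0],   # x1 + x2 >= 8  (demand coverage)
              [ 1.0,  0.0],   # x1 <= 5       (land availability)
              [ 0.0,  1.0],   # x2 <= 6
              [-1.0,  0.0],   # x1 >= 0
              [ 0.0, -1.0]])  # x2 >= 0
b = np.array([-8.0, 5.0, 6.0, 0.0, 0.0])

best_vertex, best_cost = None, np.inf
for i, j in itertools.combinations(range(len(A)), 2):
    try:
        x = np.linalg.solve(A[[i, j]], b[[i, j]])   # intersection of two boundaries
    except np.linalg.LinAlgError:
        continue                                    # parallel boundaries: no vertex
    if np.all(A @ x <= b + 1e-9):                   # keep only feasible vertices
        cost = c @ x
        if cost < best_cost:
            best_vertex, best_cost = x, cost

print("Best vertex:", best_vertex, "cost:", best_cost)
```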
DC programming and DCA for supply chain and production management: state-of-the-art models and methods
Published in International Journal of Production Research, 2020
Formally, an optimisation problem takes the form

$$\min \{ f(x) : x \in S \} \qquad (1)$$

where f is a real-valued or a vector-valued function, named the objective function, and S is called the constraint set (or the feasible set). Linear programming is a class of optimisation problems in which f is a linear function and S is a polytope defined by linear constraints. It is the oldest and the simplest class of optimisation problems. However, most real-world applications are non-linear problems. Non-linear programming (or non-linear optimisation) is a wide class of optimisation problems where the objective function is non-linear or at least one constraint is non-linear. We distinguish two categories in non-linear programming: convex programming (in (1), f is a convex function while S is a convex set) and non-convex programming (either f or S is non-convex). It is worth noting that a combinatorial optimisation problem is a non-convex program, because when the variable x is discrete, the set S is non-convex. The classification of optimisation problems is described in Figure 1.
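As a small illustration of the last point (an assumed example, not taken from the paper), consider a binary feasible set:

$$S=\{0,1\}^{n},\qquad x=(0,\dots ,0)\in S,\quad y=(1,\dots ,1)\in S,\quad \tfrac{1}{2}(x+y)=\left(\tfrac{1}{2},\dots ,\tfrac{1}{2}\right)\notin S.$$

Since a convex combination of two feasible points is not feasible, S is not convex, which is why problems with discrete variables fall into the non-convex category.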