Multi-Objective Optimization Algorithms for Deregulated Power Market
Published in Sawan Sen, Samarjit Sengupta, Abhijit Chakrabarti, Electricity Pricing, 2018
Sawan Sen, Samarjit Sengupta, Abhijit Chakrabarti
The aim of optimization is to determine the best-suited solution to a problem under a given set of constraints. Over the decades, researchers have proposed a variety of solutions to linear and non-linear optimization problems. Mathematically, an optimization problem involves a fitness function describing the problem and a set of constraints defining its solution space. Unfortunately, most traditional optimization techniques are centred around evaluating first derivatives to locate the optima on a given constrained surface. Because such derivatives are difficult to evaluate on rough or discontinuous optimization surfaces, several derivative-free optimization algorithms have emerged in recent times. The optimization problem is then recast as an intelligent search problem, in which one or more agents are employed to locate the optima on a search landscape representing the constrained surface [295].
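As a minimal illustration of this agent-based, derivative-free search view (not the specific algorithms discussed in the chapter), the following Python sketch sends a few agents on a stochastic hill-climbing walk over a bounded landscape; the objective, bounds and step size are all hypothetical.

```python
import random

def objective(x, y):
    # Hypothetical rough fitness landscape; any black-box function works here.
    return (x - 1.0) ** 2 + (y + 2.0) ** 2 + 0.5 * abs(x * y)

def random_search(n_agents=5, n_steps=200, bounds=(-5.0, 5.0)):
    """Derivative-free search: agents propose random moves and keep improvements."""
    lo, hi = bounds
    best_point, best_val = None, float("inf")
    for _ in range(n_agents):
        # Start each agent at a random point inside the constrained region.
        point = (random.uniform(lo, hi), random.uniform(lo, hi))
        val = objective(*point)
        for _ in range(n_steps):
            # Propose a small random move; no gradient information is used.
            cand = (min(hi, max(lo, point[0] + random.gauss(0.0, 0.3))),
                    min(hi, max(lo, point[1] + random.gauss(0.0, 0.3))))
            cand_val = objective(*cand)
            if cand_val < val:          # keep the move only if it improves fitness
                point, val = cand, cand_val
        if val < best_val:
            best_point, best_val = point, val
    return best_point, best_val

print(random_search())
```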
Optimization
Published in Robert B. Gramacy, Surrogates, 2020
where X is usually a hyperrectangle, a bounding box, or another simply constrained region. We don’t have access to derivative evaluations for f(x), nor do we necessarily want them (or want to approximate them), because that could represent substantial additional computational expense. As such, methods described here fall under the class of derivative-free optimization (see, e.g., Conn et al. 2009), for which many innovative algorithms have been proposed and many solid implementations are widely available. For a somewhat more recent review, including several of the surrogate-assisted/BO methods introduced here, see Larson et al. (2019).
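A minimal sketch of this setting, assuming SciPy is available: f(x) is treated as a black box evaluated only inside a hyperrectangle X, and a generic derivative-free solver (here differential evolution, not the surrogate-assisted/BO methods developed in the book) minimizes it from function values alone. The objective and bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def f(x):
    """Stand-in for an expensive black-box objective; no derivatives exposed."""
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10.0 * x[0])

# X as a hyperrectangle: simple box constraints on each coordinate.
bounds = [(-1.0, 1.0), (-1.0, 1.0)]

# A derivative-free solver only needs function evaluations inside the box.
result = differential_evolution(f, bounds, maxiter=50, seed=0)
print(result.x, result.fun)
```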
Various Soft Computing Techniques and Their Description
Published in Kaushik Kumar, Supriyo Roy, J. Paulo Davim, Soft Computing Techniques for Engineering Optimization, 2019
Kaushik Kumar, Supriyo Roy, J. Paulo Davim
In conclusion, we have discussed four of the most widely used derivative-free optimization methods: GAs, SA, random search and downhill simplex search. These techniques rely on modern high-speed computers, and they all require a significant amount of computation compared with derivative-based approaches. However, these derivative-free approaches are more flexible in incorporating intuitive guidelines and in forming objective functions.
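A minimal sketch of one of these methods, simulated annealing, follows; the objective, perturbation size and geometric cooling schedule are illustrative assumptions, and the loop shows the extra function evaluations such methods spend in place of derivative information.

```python
import math, random

def energy(x):
    # Hypothetical one-dimensional objective to minimize; any black box works.
    return x * x + 10.0 * math.sin(3.0 * x)

def simulated_annealing(x0=4.0, t0=5.0, cooling=0.95, n_iter=500, seed=0):
    """Basic SA: accept worse moves with probability exp(-delta / temperature)."""
    random.seed(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(n_iter):
        cand = x + random.gauss(0.0, 0.5)          # random perturbation, no gradients
        cand_e = energy(cand)
        delta = cand_e - e
        if delta < 0 or random.random() < math.exp(-delta / t):
            x, e = cand, cand_e
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling                               # geometric cooling schedule
    return best_x, best_e

print(simulated_annealing())
```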
Using first-order information in direct multisearch for multiobjective optimization
Published in Optimization Methods and Software, 2022
R. Andreani, A. L. Custódio, M. Raydan
The purpose of the current work is twofold. Our first objective is to compare the performance of the derivative-free DMS method and the derivative-based MOSQP algorithm. In single-objective optimization, it is common to say that if derivatives are available or can be obtained at a reasonable cost (e.g. using finite differences), then derivative-based optimization is preferable to derivative-free optimization methods [4, p. 6]. We will provide numerical results on a large set of benchmark MOO problems that allow us to assess the numerical performance of derivative-based and derivative-free optimization solvers when computing approximations to the complete Pareto fronts of derivative-based MOO problems. Our second objective is to assess the potential enrichment obtained by adding first-order information, when derivatives are available, to the DMS framework. We will describe and analyze several combined techniques that maintain the search/poll paradigm of DMS while conveniently adding gradient information to the poll step. Again, the value of the proposed strategies will be assessed through numerical experiments.
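To make the idea concrete, the following simplified sketch (not the authors' DMS algorithm or their combined variants) shows a single poll step of a directional multiobjective search: the usual coordinate poll directions are optionally preceded by the negated objective gradients when derivatives are available, and a trial point is accepted if it Pareto-dominates the current one. The objectives, step size and acceptance rule are illustrative assumptions.

```python
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization)."""
    return np.all(fa <= fb) and np.any(fa < fb)

def poll_step(x, objectives, step, gradients=None):
    """One poll step of a DMS-style directional search (simplified sketch).

    objectives: callable returning the vector of objective values at a point.
    gradients:  optional callable returning one gradient per objective; when
                given, the negated gradients are polled first.
    """
    fx = objectives(x)
    n = x.size
    directions = []
    if gradients is not None:
        # First-order enrichment: try steepest-descent directions of each objective.
        for g in gradients(x):
            norm = np.linalg.norm(g)
            if norm > 0:
                directions.append(-g / norm)
    # Standard positive spanning set: +/- coordinate directions.
    directions += [e for i in range(n) for e in (np.eye(n)[i], -np.eye(n)[i])]
    for d in directions:
        trial = x + step * d
        ft = objectives(trial)
        if dominates(ft, fx):          # accept a nondominated improving point
            return trial, ft, True
    return x, fx, False                # poll failed; the caller would shrink the step

# Example: two quadratic objectives with analytic gradients.
objs = lambda x: np.array([np.sum((x - 1) ** 2), np.sum((x + 1) ** 2)])
grads = lambda x: np.array([2 * (x - 1), 2 * (x + 1)])
print(poll_step(np.array([2.0, 2.0]), objs, step=0.5, gradients=grads))
```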
Optimization by moving ridge functions: derivative-free optimization for computationally intensive functions
Published in Engineering Optimization, 2022
James C. Gross, Geoffrey T. Parks
Derivative-free optimization (DFO) methods seek to solve optimization problems using only function evaluations—that is, without the use of derivative information. These methods are particularly suited for cases where the objective function is a ‘black box’ or computationally intensive (Conn, Scheinberg, and Vicente 2009). In these cases, computing gradients analytically or through algorithmic differentiation may be infeasible, and approximating gradients using finite differences may be intractable. Common applications of DFO methods include engineering design optimization (Kipouros et al. 2008) and hyper-parameter optimization in machine learning (Ghanbari and Scheinberg 2017), among others (Levina et al. 2009). Derivative-free trust region (DFTR) methods are an important class of DFO methods that iteratively create and optimize a local surrogate model of the objective in a small region of the function domain, called the trust region. Unlike standard trust region methods, DFTR methods use interpolation or regression to construct a surrogate model, thereby avoiding the use of derivative information.
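The following is a minimal sketch of the DFTR idea (not the moving-ridge-function method proposed in the article): sample points inside the trust region, fit a regression surrogate, step toward the surrogate minimizer within the region, and grow or shrink the radius based on how well the model predicted the actual decrease. The sampling rule, linear surrogate and acceptance thresholds are simplifying assumptions.

```python
import numpy as np

def dftr_minimize(f, x0, radius=1.0, n_iter=30, n_samples=8, seed=0):
    """Sketch of a derivative-free trust-region loop with a linear regression surrogate."""
    rng = np.random.default_rng(seed)
    x, fx = np.asarray(x0, float), f(x0)
    for _ in range(n_iter):
        # Sample points inside the current trust region (no derivatives used).
        pts = x + radius * rng.uniform(-1.0, 1.0, size=(n_samples, x.size))
        vals = np.array([f(p) for p in pts])
        # Fit a linear surrogate m(p) = c + g.(p - x) by least-squares regression.
        A = np.hstack([np.ones((n_samples, 1)), pts - x])
        coef, *_ = np.linalg.lstsq(A, vals, rcond=None)
        g = coef[1:]
        if np.linalg.norm(g) < 1e-12:
            break
        # The minimizer of a linear model over the trust region lies on its boundary.
        step = -radius * g / np.linalg.norm(g)
        cand, fcand = x + step, f(x + step)
        predicted = -g @ step              # model decrease (positive if improving)
        actual = fx - fcand
        rho = actual / predicted if predicted > 0 else -1.0
        if rho > 0.1:                      # accept the step and possibly expand
            x, fx = cand, fcand
            if rho > 0.75:
                radius *= 2.0
        else:                              # reject the step and shrink the region
            radius *= 0.5
    return x, fx

# Example on a smooth black-box function (gradients are never queried).
rosen = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
print(dftr_minimize(rosen, np.array([-1.2, 1.0])))
```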
GPU parameter tuning for tall and skinny dense linear least squares problems
Published in Optimization Methods and Software, 2020
Benjamin Sauk, Nikolaos Ploskas, Nikolaos Sahinidis
Numerical solvers on graphics processing units (GPUs) are typically designed for one particular GPU architecture [46], and may be suboptimal or even unusable on another one [40]. Programmers have circumvented this problem by introducing tuneable parameters into algorithms that can be modified when installing the software on a different GPU architecture than the one it was designed for [1]. However, determining tuning parameters that maximize solver performance is a challenging optimization problem, because there is no explicit relationship to model the interactions between the software, algorithms, and hardware. Derivative-free optimization (DFO) [59] and simulation optimization (SO) [6] can be used to solve this problem since they do not require explicit functional representations of the objective function or of the constraints. Instead, the linear least-squares problem (LLSP) solver can be treated as a black-box system that accepts tuning parameters and outputs a performance metric such as execution time or floating point operations per second (FLOPs).
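A hedged sketch of this black-box view: run_llsp_benchmark below is a hypothetical stand-in for timing the solver under a given parameter configuration, and a simple random search over the configuration space plays the role of the DFO/SO solver; the parameter names and value ranges are illustrative assumptions.

```python
import itertools, random

def run_llsp_benchmark(block_size, threads_per_block):
    """Hypothetical stand-in for timing the GPU LLSP solver with given parameters.

    In practice this would launch the solver on the target GPU and return a
    measured execution time; a synthetic expression keeps the sketch runnable.
    """
    return abs(block_size - 96) / 64.0 + abs(threads_per_block - 256) / 512.0

def tune(search_space, budget=20, seed=0):
    """Black-box tuning: sample configurations and keep the fastest one."""
    random.seed(seed)
    candidates = list(itertools.product(*search_space.values()))
    best_cfg, best_time = None, float("inf")
    for cfg in random.sample(candidates, min(budget, len(candidates))):
        runtime = run_llsp_benchmark(*cfg)      # one expensive black-box evaluation
        if runtime < best_time:
            best_cfg, best_time = cfg, runtime
    return dict(zip(search_space.keys(), best_cfg)), best_time

space = {"block_size": [32, 64, 96, 128], "threads_per_block": [64, 128, 256, 512]}
print(tune(space))
```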