Introduction to Optimization and Relevance of Soft Computing towards Optimal Solution
Published in Kaushik Kumar, Supriyo Roy, J. Paulo Davim, Soft Computing Techniques for Engineering Optimization, 2019
Kaushik Kumar, Supriyo Roy, J. Paulo Davim
The steepest descent method (popularly known as the gradient method) is an old technique for minimizing a given function defined on a multi-dimensional input space. This method forms the basis for many direct methods used in both constrained and unconstrained optimization. Despite its slow convergence, it remains one of the most frequently used non-linear optimization techniques because it is easy to apply in many application areas. The well-known steepest descent update is θ_new = θ_old − ηg, with some positive step size η and gradient direction g.
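The update rule above can be sketched in a few lines; this is a minimal illustration with a fixed step size η, not code from the chapter, and the function and gradient below are made-up examples:

```python
import numpy as np

def steepest_descent(grad, theta0, eta=0.1, tol=1e-8, max_iter=1000):
    """Minimize a function by repeatedly stepping against its gradient.

    grad: callable returning the gradient g at a point (assumed interface).
    eta: fixed positive step size.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        g = grad(theta)
        if np.linalg.norm(g) < tol:   # stop once the gradient is near zero
            break
        theta = theta - eta * g       # theta_new = theta_old - eta * g
    return theta

# Example: minimize f(x, y) = x^2 + 2*y^2, gradient (2x, 4y); minimum at (0, 0).
sol = steepest_descent(lambda t: np.array([2 * t[0], 4 * t[1]]), [3.0, -2.0])
```

With a well-chosen fixed η the iterates contract toward the minimizer, but on badly scaled problems this scheme exhibits the slow, zigzagging convergence the text mentions.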
Introduction to mine systems
Published in Amit Kumar Gorai, Snehamoy Chatterjee, Optimization Techniques and their Applications to Mine Systems, 2023
Amit Kumar Gorai, Snehamoy Chatterjee
In the gradient method, derivative information of the objective function is used to locate the solution. The first derivatives give the slopes of the function, which become zero at an optimal point. Following the steepest slope, i.e. the gradient of the function, leads toward the optimal solution. For a one-dimensional problem, Newton's method (Avriel, 2003) can be used to find the optimum of the function.
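For the one-dimensional case, Newton's method finds the point where the first derivative vanishes by iterating x ← x − f′(x)/f″(x). A small sketch, assuming the first and second derivatives are available as callables (the example objective is invented for illustration):

```python
def newton_optimize(df, d2f, x0, tol=1e-10, max_iter=50):
    """Newton's method for a one-dimensional optimum: solve f'(x) = 0.

    df, d2f: first and second derivatives of the objective (assumed inputs).
    """
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)  # Newton step on the derivative
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = x^4 - 3*x^2 + 2 has a local minimum at x = sqrt(3/2).
xmin = newton_optimize(lambda x: 4 * x**3 - 6 * x,
                       lambda x: 12 * x**2 - 6, x0=1.0)
```

Near a point where f″ > 0 the iteration converges quadratically to a local minimum; a starting point in a region where f″ < 0 would instead be drawn to a maximum.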
Space mapping techniques for the optimal inflow control of transmission lines
Published in Optimization Methods and Software, 2018
The gradient method is a common approach to solving nonlinear optimization problems. Starting with an initial solution, in each iteration the gradient of the objective function with respect to the control u is computed. The current iterate is updated in the direction of the negative gradient until the gradient is close to zero, at which point a (local) optimum has most likely been found. Since the gradient has to be computed repeatedly, applying e.g. a central difference quotient is very time-consuming, as several fine-model forward simulations have to be performed. A more efficient way to compute the gradient is the adjoint method [30,34,48].
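The cost argument is easy to see in code: a central difference quotient needs two objective evaluations per component of u, so 2n forward simulations per gradient. A minimal sketch (the objective below is a stand-in for a fine-model simulation, not the transmission-line model of the paper):

```python
import numpy as np

def central_difference_gradient(f, u, h=1e-6):
    """Approximate the gradient of f at control u by central differences.

    Requires 2*n evaluations of f (the "forward simulations"), which is
    why this approach becomes expensive when f is a costly fine model.
    """
    u = np.asarray(u, dtype=float)
    g = np.zeros_like(u)
    for i in range(u.size):
        e = np.zeros_like(u)
        e[i] = h
        g[i] = (f(u + e) - f(u - e)) / (2 * h)  # i-th partial derivative
    return g

# Placeholder objective: f(u) = u1^2 + 3*u2^2, with gradient (2*u1, 6*u2).
f = lambda u: u[0] ** 2 + 3 * u[1] ** 2
g = central_difference_gradient(f, [1.0, 2.0])
```

The adjoint method avoids this per-component cost by computing the full gradient from one forward and one adjoint solve, which is what makes it attractive here.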
New gradient methods with adaptive stepsizes by approximate models
Published in Optimization, 2023
Zexian Liu, Hongwei Liu, Ting Wang
The gradient method is the simplest and oldest iterative method, which uses the negative gradient as the search direction. Different stepsizes result in different gradient methods, and the stepsize is extremely crucial to the theory and numerical performance of the gradient method. The gradient method dates back to the steepest descent (SD) method proposed by Cauchy [1]. The SD method usually converges slowly and often suffers from zigzagging behaviour [2]. A surprising result about the gradient method (called the BB method) was given by Barzilai and Borwein [3] in 1988, whose famous stepsizes lead to superlinear convergence for two-dimensional strictly convex quadratic functions. Since then, gradient methods have received extensive attention due to their nice numerical performance. The BB method was proved to be globally [4] and R-linearly [5, 6] convergent for strictly convex quadratic functions of any dimension. Yuan [7] derived a new stepsize (called Yuan's stepsize), which gives the gradient method the two-dimensional termination property. Based on Yuan's stepsize and the SD stepsize, Dai and Yuan [8] presented the so-called Dai-Yuan (monotone) gradient method, which performs even better than the (nonmonotone) BB method. In addition, Dai and Yuan [9] also proposed the alternate minimization gradient method (AM), whose stepsize is determined by alternately minimizing the function value and the gradient norm along the line of the negative gradient direction. Zhou et al. [10] proposed an adaptive BB method (ABB), where the stepsize is chosen adaptively between the two well-known BB stepsizes. Another efficient gradient method is the SDC method [11], which takes some SD iterates followed by s steps with the same Yuan's stepsize. By incorporating the retard technique, Frassoldati et al. [12] presented two efficient gradient methods (called the ABBmin1 and ABBmin2 methods) which use the BB stepsize alternately with other stepsizes. Liu et al. [13] introduced a new type of stepsize called the approximately optimal stepsize, which is generated by minimizing an approximate model of the objective function along the line of the negative gradient direction. Very recently, Huang et al. [14] developed a novel stepsize, which makes the corresponding BB method enjoy the two-dimensional termination property. In 1997, Raydan [15] extended the BB method to general unconstrained optimization by incorporating a nonmonotone line search [16]. Subsequently, many modified BB methods [17–19] were proposed. More advances on gradient methods can be found in [20–26].
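To make the role of the stepsize concrete, here is a sketch of the BB method on a strictly convex quadratic, using the first Barzilai-Borwein stepsize α_k = sᵀs / sᵀy with s = x_k − x_{k−1} and y = g_k − g_{k−1}. The initial stepsize and the fallback when sᵀy ≤ 0 are assumptions of this sketch, and the quadratic below is an invented test problem:

```python
import numpy as np

def bb_gradient_method(grad, x0, tol=1e-8, max_iter=500):
    """Gradient method with the Barzilai-Borwein (BB1) stepsize.

    alpha_k = s^T s / s^T y, s = x_k - x_{k-1}, y = g_k - g_{k-1}.
    The first iteration uses a small fixed stepsize (an assumption here).
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = 1e-3                       # initial stepsize before BB kicks in
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g          # negative-gradient step
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 0 else 1e-3  # BB1 stepsize
        x, g = x_new, g_new
    return x

# Strictly convex quadratic f(x) = 0.5 * x^T diag(1, 10) x; minimizer at 0.
sol = bb_gradient_method(lambda x: np.array([x[0], 10 * x[1]]), [5.0, 2.0])
```

Unlike the SD method, the iteration is nonmonotone: the function value can rise between steps, yet on convex quadratics it typically converges far faster than Cauchy's stepsize, which is what made the BB result so surprising.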