Explore chapters and articles related to this topic
A new nonlinear model predictive control algorithm for vehicle path tracking
Published in Johannes Edelmann, Manfred Plöchl, Peter E. Pfeffer, Advanced Vehicle Control AVEC’16, 2017
Backtracking Line Search. After computing the search direction $\tilde{U}_k^{(j)}$, the NAS algorithm performs a nonlinear line search. The desired step size α is computed by backtracking along $\tilde{U}_k^{(j)}$ (Nocedal and Wright 2006, Ch. 3). The line search starts from an upper bound αmax, whose generic value is 1; however, αmax ∈ [0, 1] can be smaller if the line search is blocked by non-active constraints. Accordingly, αmax is selected as the largest value of α, capped at 1, for which the iterate defined by (13) remains feasible.
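A minimal Python sketch of such a backtracking scheme (function names, the quadratic test objective, and the Armijo-type acceptance test are illustrative assumptions; the NAS algorithm's actual feasibility check and acceptance criterion are not reproduced here). The search starts from the cap alpha_max and shrinks the step until a sufficient-decrease condition holds:

```python
import numpy as np

def backtracking_line_search(f, grad, x, direction, alpha_max=1.0,
                             shrink=0.5, c1=1e-4, max_iter=50):
    """Generic backtracking sketch: start from the cap alpha_max (<= 1 when
    the step is limited by constraints) and shrink the step until a
    sufficient-decrease (Armijo-type) condition holds."""
    f0 = f(x)
    slope = grad(x) @ direction        # directional derivative; negative for descent
    alpha = alpha_max
    for _ in range(max_iter):
        if f(x + alpha * direction) <= f0 + c1 * alpha * slope:
            return alpha               # sufficient decrease achieved
        alpha *= shrink                # otherwise backtrack
    return alpha

# Illustrative use on a simple quadratic objective
f = lambda x: 0.5 * x @ x
grad = lambda x: x
x = np.array([3.0, -4.0])
alpha = backtracking_line_search(f, grad, x, -grad(x), alpha_max=1.0)
```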
Nonlinear Optimization Methods
Published in Larry W. Mays, Optimal Control of Hydrosystems, 1997
In general, all search algorithms for unconstrained minimization consist of two basic steps. The first step is to determine the search direction along which the objective function value decreases. The second step, called a line search (or one-dimensional search), is to obtain the optimum solution point along the search direction determined in the first step. Mathematically, the line search minimization can be stated as
$$\min_{\alpha \ge 0} f(\mathbf{x}_k + \alpha\,\mathbf{d}_k),$$
where $\mathbf{x}_k$ is the current solution point, $\mathbf{d}_k$ is the search direction from the first step, and α is the step size along that direction.
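A minimal sketch of this two-step structure (steepest descent for the direction, SciPy's bounded scalar minimizer for the one-dimensional search; the bound of 10 on α and the quadratic test function are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def descent_step(f, grad, x):
    """One iteration: (1) pick a direction along which f decreases,
    (2) solve the one-dimensional line-search problem along it."""
    d = -grad(x)                                  # step 1: search direction
    phi = lambda alpha: f(x + alpha * d)          # restriction of f to the ray
    res = minimize_scalar(phi, bounds=(0.0, 10.0), method="bounded")
    return x + res.x * d                          # step 2: move to the 1-D minimizer

# Illustrative quadratic objective
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x = np.array([5.0, 1.0])
for _ in range(20):
    x = descent_step(f, grad, x)   # x approaches the minimizer (0, 0)
```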
Nonlinear structural analysis
Published in Paulo B. Lourenço, Angelo Gaetani, Finite Element Analysis for Building Assessment, 2022
Paulo B. Lourenço, Angelo Gaetani
Regarding the latter, the line search algorithm is a widely used technique to increase both the convergence rate and the likelihood of convergence. In mathematical terms, the method makes the solution procedure globally convergent, i.e. convergent to some solution independently of the starting point. The method is schematized in Eq. (2.70). Compared with the last of Eq. (2.52), the formula means that the method scales the increment of displacement by a factor α. The goal of this factor is to force the current iteration to be a better estimate than the previous one. To gain a visual intuition of the method, let us consider equilibrium from an energetic point of view. Basically, equilibrium is satisfied when the work of the residuals on the displacement increments, Ω, reaches zero, which is also the global minimum if Ω is considered in absolute value. With reference to Eq. (2.70), since all data are known at iteration i − 1, the work $\Omega_{i-1}$ is also known: the objective is to find a value of α for which the work corresponding to $d_{n,i}$ is lower than $\Omega_{i-1}$. In this sense, α works as a convergence enhancer (or catalyst, in the chemical lexicon). If the work of the residuals decreases at each iteration, i.e. $\Omega_i < \Omega_{i-1}$, the solution method converges to its minimum.
$$d_{n,i} = d_{n,i-1} + \alpha\,\delta d_{n,i} \qquad (2.70)$$
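As a sketch under illustrative assumptions (a hypothetical cubic residual and a secant iteration on the residual work; not the book's procedure), the scaling factor α in Eq. (2.70) can be chosen so that the work of the residuals projected on the displacement increment is driven towards zero:

```python
import numpy as np

def energy_line_search(residual, d_prev, delta_d, tol=0.1, max_iter=10):
    """Scale the displacement increment by a factor alpha so that the projected
    residual work s(alpha) = delta_d . r(d_prev + alpha*delta_d) is driven
    approximately to zero (secant iteration; a sketch of the energy criterion)."""
    s = lambda a: delta_d @ residual(d_prev + a * delta_d)
    s0 = s(0.0)                       # residual work rate at the previous solution
    a_prev, a_curr = 0.0, 1.0
    s_prev, s_curr = s0, s(1.0)
    for _ in range(max_iter):
        if abs(s_curr) <= tol * abs(s0) or s_curr == s_prev:
            break
        # secant step towards the zero of s(alpha)
        a_next = a_curr - s_curr * (a_curr - a_prev) / (s_curr - s_prev)
        a_prev, a_curr = a_curr, a_next
        s_prev, s_curr = s_curr, s(a_curr)
    return a_curr

# Illustrative hardening "structure": r(d) = K d + c d^3 - f_ext (hypothetical data)
K = np.diag([2.0, 3.0])
c, f_ext = 0.5, np.array([1.0, 2.0])
residual = lambda d: K @ d + c * d**3 - f_ext
d_prev = np.zeros(2)
delta_d = np.linalg.solve(K, -residual(d_prev))      # Newton-type increment
alpha = energy_line_search(residual, d_prev, delta_d)
d_new = d_prev + alpha * delta_d                     # Eq. (2.70)
```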
A class of projected-search methods for bound-constrained optimization
Published in Optimization Methods and Software, 2023
Michael W. Ferry, Philip E. Gill, Elizabeth Wong, Minxin Zhang
Many methods for unconstrained minimization generate a sequence of iterates $\{x_k\}$ such that $x_{k+1}$ is chosen to give a decrease in f that is at least as good as a fixed fraction of the decrease in the local affine model of f at $x_k$. If $x_{k+1}$ is computed as $x_k + \alpha_k p_k$, where $p_k$ is a descent direction for f at $x_k$ and $\alpha_k$ is a positive scalar, then the decrease condition may be written as
$$f(x_k + \alpha_k p_k) \le f(x_k) + \eta\,\alpha_k \nabla f(x_k)^{T} p_k, \qquad (1)$$
with $\eta \in (0,1)$, which is known as the Armijo condition (see, e.g. Armijo [1], Ortega and Rheinboldt [28]). Most Armijo line searches are implemented as a simple backtracking procedure in which an initial step is reduced by a constant factor until the Armijo condition (1) is satisfied. Alternatively, backtracking may be used in conjunction with a simple quadratic interpolation scheme using $f(x_k)$, $\nabla f(x_k)^{T} p_k$ and $f(x_k + \alpha p_k)$ at each trial α (see Dennis and Schnabel [9]).
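A hedged Python sketch of the backtracking variant with quadratic interpolation (parameter names and safeguards are illustrative, not taken from the paper): at each rejected trial α, the quadratic model built from φ(0) = f(x_k), φ'(0) = ∇f(x_k)ᵀp_k and φ(α) = f(x_k + α p_k) supplies the next trial step.

```python
import numpy as np

def armijo_backtracking_quadratic(f, g, x, p, eta=1e-4, alpha=1.0, max_iter=30):
    """Armijo backtracking; each rejected trial step is replaced by the minimizer
    of the quadratic fitted to phi(0), phi'(0) and phi(alpha), safeguarded to
    stay within [0.1*alpha, 0.5*alpha]."""
    phi0 = f(x)
    dphi0 = g(x) @ p                      # phi'(0); negative for a descent direction
    for _ in range(max_iter):
        phi_a = f(x + alpha * p)
        if phi_a <= phi0 + eta * alpha * dphi0:      # Armijo condition (1)
            return alpha
        # minimizer of the interpolating quadratic
        alpha_q = -dphi0 * alpha**2 / (2.0 * (phi_a - phi0 - dphi0 * alpha))
        alpha = min(max(alpha_q, 0.1 * alpha), 0.5 * alpha)   # safeguard
    return alpha

# Illustrative use on a quartic objective
f = lambda x: np.sum(x**4)
g = lambda x: 4.0 * x**3
x0 = np.array([1.0, -2.0])
step = armijo_backtracking_quadratic(f, g, x0, -g(x0))
```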
An inertial subgradient extragradient algorithm with adaptive stepsizes for variational inequality problems
Published in Optimization Methods and Software, 2022
Xiaokai Chang, Sanyang Liu, Zhao Deng, Suoping Li
To obtain proper step sizes, linesearch and outer approximation techniques were involved in [22–24,34,37] when the operator F is not Lipschitz continuous or the Lipschitz constant is unknown. Based on local information about the operator and some linesearch procedures, Malitsky [29] introduced an efficient projected reflected gradient method. Generally, the linesearch can be quite costly, as it requires computing additional values of F or of the projection onto the feasible set, or even both, in every linesearch iteration. In [16,21], Hieu et al. introduced extragradient-like algorithms with or without an inertial step. However, they adopted nonincreasing step sizes following Yang and Liu [40], which might be unfavourable if the algorithm starts in a region with high curvature of F.
Analysis of the gradient method with an Armijo–Wolfe line search on a class of non-smooth convex functions
Published in Optimization Methods and Software, 2020
The gradient method dates back to Cauchy [6]. Armijo [1] was the first to establish convergence to stationary points of smooth functions using an inexact line search with a simple ‘sufficient decrease’ condition. Wolfe [26], discussing line searches for more general classes of methods, introduced a ‘directional derivative increase’ condition among several others. The Armijo condition ensures that the line search step is not too large while the Wolfe condition ensures that it is not too small. Powell [22] seems to have been the first to point out that combining the two conditions leads to a convenient bracketing line search, noting also in another paper [23] that use of the Wolfe condition ensures that, for quasi-Newton methods, the updated Hessian approximation is positive definite. Hiriart-Urruty and Lemaréchal [12, Vol. 1, Ch. 11.3] give an excellent discussion of all these issues, although they reference neither [1] nor [22,23]. They also comment (p. 402) on a surprising error in [6].
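In standard textbook notation (the constants $c_1$ and $c_2$, with $0 < c_1 < c_2 < 1$, are the usual parameters and are not taken from the article), for a step α along a descent direction p from the point x the two conditions read:
$$f(x + \alpha p) \le f(x) + c_1\,\alpha\,\nabla f(x)^{T} p \quad \text{(Armijo: sufficient decrease, step not too large)},$$
$$\nabla f(x + \alpha p)^{T} p \ge c_2\,\nabla f(x)^{T} p \quad \text{(Wolfe: directional derivative increase, step not too small)}.$$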