A new nonlinear model predictive control algorithm for vehicle path tracking
Published in Advanced Vehicle Control AVEC'16 (eds. Johannes Edelmann, Manfred Plöchl, Peter E. Pfeffer), 2017
Backtracking Line Search. After computing the search direction Ũk(j), the NAS algorithm performs a nonlinear line search. The desired step size α is computed by backtracking along Ũk(j) (Nocedal and Wright 2006, Ch. 3). The line search starts from an upper bound αmax, whose generic value is 1. However, αmax ∈ [0, 1] can be smaller when the step is blocked by constraints outside the active set: αmax is selected as the largest value of α, capped at 1, for which the iterate (13) remains feasible.
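Concretely, αmax can be found with a ratio test over the inactive constraints before backtracking begins. The sketch below illustrates this two-stage scheme under the assumption of linear inequality constraints G u ≤ h on the stacked decision vector; the function names, tolerance, and the Armijo acceptance test are illustrative choices, not taken from the paper.

```python
import numpy as np

def feasible_alpha_max(u, d, G, h, cap=1.0, tol=1e-12):
    """Largest alpha with G @ (u + alpha * d) <= h, capped at `cap`.

    Ratio test: only constraints with G @ d > 0 can block the step,
    and at a feasible u the slack h - G @ u is nonnegative.
    """
    Gd = G @ d
    slack = h - G @ u
    blocking = Gd > tol                 # constraints the step moves toward
    if not np.any(blocking):
        return cap
    return min(cap, float(np.min(slack[blocking] / Gd[blocking])))

def backtracking_step(f, grad_u, u, d, alpha_max, c1=1e-4, rho=0.5):
    """Backtrack from alpha_max until an Armijo sufficient-decrease test holds."""
    f0 = f(u)
    slope = grad_u @ d                  # directional derivative; negative for descent
    alpha = alpha_max
    while f(u + alpha * d) > f0 + c1 * alpha * slope:
        alpha *= rho
    return alpha
```

A ratio test suffices here because, along a fixed direction, each linear constraint's slack changes affinely in α, so the first blocking constraint determines αmax exactly.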
Exact gradient methods with memory
Published in Optimization Methods and Software, 2022
Generally, IGMM and EGMM can be considered extensions of GM. Seen from this perspective, the per-iteration complexity of each is the sum of a base complexity that is independent of m (as in a GM iteration) and the model overhead. The iterations of GM equipped with backtracking line search are inexpensive because the point where the gradient is queried does not change during backtracks. Thus, with the aggressive search parameters we have selected, we expect on average one backtrack per iteration, which translates to an average of one call to ∇f and two calls to f at each iteration k. Note that the model stores the oracle output (function value and gradient) from iteration k−1, so it need not be recomputed at iteration k. The average iteration k of AGMM calls f and ∇f twice each, because the query point y changes at each backtrack. Therefore, the base cost of one AGMM iteration is twice that of GM. Each oracle call in our problems costs a number of floating point operations linear in the problem dimension, with the effective constant in practice below one owing to the widespread deployment of single-instruction-multiple-data (SIMD) capability on modern machines.
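To make the oracle-call accounting concrete, the following is a minimal sketch of a gradient method whose backtracking adapts a local Lipschitz estimate L (step size 1/L); the update rule and parameter names are assumptions for illustration, not the paper's exact scheme. The gradient is queried once per outer iteration at x, while each backtrack costs only one extra evaluation of f.

```python
import numpy as np

def gm_backtracking(f, grad, x, L0=1.0, inc=2.0, dec=2.0, iters=100):
    """Gradient method with backtracking on a local Lipschitz estimate L.

    The gradient is evaluated once per iteration at x, which does not
    change during backtracks; only f is re-evaluated at trial points.
    """
    calls = {"f": 0, "grad": 0}
    L = L0
    fx = f(x); calls["f"] += 1           # stored; never recomputed at x
    for _ in range(iters):
        gx = grad(x); calls["grad"] += 1
        L /= dec                          # aggressive: first try a longer step
        while True:
            x_new = x - gx / L            # candidate step of size 1/L
            f_new = f(x_new); calls["f"] += 1
            # descent-lemma test for the step size 1/L
            if f_new <= fx - gx @ gx / (2.0 * L):
                break
            L *= inc                      # backtrack: shrink the step
        x, fx = x_new, f_new
    return x, calls
```

With exactly one backtrack per iteration, the counters report calls["grad"] == iters and calls["f"] == 2 * iters + 1, matching the one-∇f/two-f average described above.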
Distributed Newton and Quasi-Newton methods for formation control of autonomous vehicles
Published in Ships and Offshore Structures, 2020
Mansour Karkoub, Huiwei Wang, Tzu-Sung Wu
Backtracking line search starts with some initial step size and then gradually reduces it by a factor σ until the condition f(x + αd) ≤ f(x) + σα∇f(x)ᵀd is satisfied. For a sufficiently small α, f(x + αd) behaves very similarly to its first-order Taylor approximation f(x) + α∇f(x)ᵀd; hence, f can be decreased by approximately α∇f(x)ᵀd when d is a descent direction. However, for the larger step sizes that are desired for faster convergence, f(x + αd) may lie above this line, and too large a value of α may even increase the value of the function. The Armijo rule provides a reasonable trade-off between the two situations by accepting a decrease that is a factor σ of the one suggested by first-order extrapolation.
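A minimal sketch of the rule just described, assuming the common parameterization in which the shrink factor (beta below) is kept separate from the sufficient-decrease constant σ; the excerpt above uses σ for both roles.

```python
import numpy as np

def armijo_backtracking(f, grad_fx, x, d, alpha0=1.0, sigma=0.1, beta=0.5):
    """Shrink alpha by beta from alpha0 until f decreases by at least
    sigma times the first-order prediction alpha * grad_fx @ d."""
    fx = f(x)
    slope = grad_fx @ d                  # negative for a descent direction d
    alpha = alpha0
    while f(x + alpha * d) > fx + sigma * alpha * slope:
        alpha *= beta
    return alpha

# Example on f(x) = x^T x with the steepest-descent direction d = -grad f(x):
f = lambda x: x @ x
x = np.array([1.0, -2.0])
g = 2.0 * x
alpha = armijo_backtracking(f, g, x, -g)
print(alpha, f(x - alpha * g))           # accepted step and decreased value
```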