Other Important Optimization Algorithms
Published in Nature-Inspired Algorithms, 2023
Krishn Kumar Mishra
One popular method for solving an optimization problem is to compute the function’s derivative and set it to zero. A value at which the derivative vanishes is a candidate for the optimal solution; the second derivative is then evaluated at that value, and the optimal solution is identified from its sign. Because a computer cannot handle continuous values and works only with discrete inputs, the derivative of the function at each input value is calculated as a gradient. In a gradient-based method, the gradients of selected solutions are evaluated and used to identify directions toward the global optimal solution. Algorithms that implement a gradient-based approach are generally point-to-point: the search for the global optimum begins from a single solution, and a new solution is obtained from the existing one by adding an increment to it. The size of this added increment is referred to as the step size. After a new solution is created, the two solutions are compared, and a new search direction and step size are calculated. The rule used to increase or decrease the step size and to predict the search direction defines the type of algorithm. A variety of gradient-based methods are available for solving optimization problems, including the steepest descent method, the conjugate gradient method, the nonlinear conjugate gradient method, the Newton method, and quasi-Newton methods. Although many variants use gradient information to guide the search, derivative-free methods were later developed because gradient-based methods are computationally expensive.
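As an illustration of the point-to-point search described above, the short Python sketch below applies a plain steepest descent update, moving the current solution along the negative gradient by a fixed step size until the gradient becomes small. The objective function, starting point, and step size are illustrative choices, not taken from the chapter.

```python
import numpy as np

def f(x):
    """Example objective: a simple convex quadratic (illustrative choice)."""
    return x[0] ** 2 + 10.0 * x[1] ** 2

def grad_f(x):
    """Analytic gradient of the example objective."""
    return np.array([2.0 * x[0], 20.0 * x[1]])

def steepest_descent(x0, step_size=0.02, max_iter=500, tol=1e-8):
    """Point-to-point search: move the current solution along the
    negative gradient by a fixed step size until the gradient is small."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:      # stopping criterion
            break
        x = x - step_size * g            # new solution from the previous one
    return x

x_opt = steepest_descent([5.0, -3.0])
print(x_opt)  # approaches the minimizer (0, 0)
```

More sophisticated methods differ mainly in how they choose the search direction (conjugate gradient, Newton, quasi-Newton) and how they adapt the step size via a line search, rather than keeping it fixed as in this sketch.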
Robust Low-Rank Tensor Decomposition with the L2 Criterion
Published in Technometrics, 2023
Qiang Heng, Eric C. Chi, Yufeng Liu
Historically, the alternating least squares (ALS) algorithm (Carroll and Chang 1970; Harshman 1970) has been the “work-horse” for solving the above CP decomposition problem; it updates one factor matrix at a time while holding the others fixed. Acar, Dunlavy, and Kolda (2011), however, showed that a direct optimization approach can obtain more accurate estimates of the low-rank tensor, especially when the specified rank is greater than the true rank. By direct optimization, we mean that the gradients with respect to the factor matrices are computed and all the factor matrices are updated “all-at-once,” or simultaneously, with a local optimization method such as the nonlinear conjugate gradient method (NCG) or the limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm (L-BFGS). We refer to this direct optimization approach as CP-OPT.
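To make the “all-at-once” idea concrete, the sketch below fits a plain least-squares CP model by stacking all three factor matrices into a single vector and handing the exact gradient to SciPy’s L-BFGS-B routine. It is a minimal illustration of the CP-OPT strategy under these simplifying assumptions, not the authors’ implementation, and it omits the robustness modifications discussed in the article.

```python
import numpy as np
from scipy.optimize import minimize

def cp_opt(X, rank, seed=0):
    """All-at-once CP fit: stack the factor matrices into one vector and
    let L-BFGS update them simultaneously using the exact gradient."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    x0 = 0.1 * rng.standard_normal(rank * (I + J + K))

    def unpack(v):
        A = v[:I * rank].reshape(I, rank)
        B = v[I * rank:(I + J) * rank].reshape(J, rank)
        C = v[(I + J) * rank:].reshape(K, rank)
        return A, B, C

    def loss_and_grad(v):
        A, B, C = unpack(v)
        E = X - np.einsum('ir,jr,kr->ijk', A, B, C)   # residual tensor
        f = 0.5 * np.sum(E ** 2)
        gA = -np.einsum('ijk,jr,kr->ir', E, B, C)
        gB = -np.einsum('ijk,ir,kr->jr', E, A, C)
        gC = -np.einsum('ijk,ir,jr->kr', E, A, B)
        return f, np.concatenate([gA.ravel(), gB.ravel(), gC.ravel()])

    res = minimize(loss_and_grad, x0, jac=True, method='L-BFGS-B')
    return unpack(res.x)

# Small synthetic example: fit a rank-2 tensor and check the reconstruction error.
rng = np.random.default_rng(1)
A, B, C = rng.standard_normal((4, 2)), rng.standard_normal((5, 2)), rng.standard_normal((6, 2))
X = np.einsum('ir,jr,kr->ijk', A, B, C)
A_hat, B_hat, C_hat = cp_opt(X, rank=2)
print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A_hat, B_hat, C_hat)))
```

An ALS variant would instead cycle through the three factors, solving a linear least-squares problem for one while holding the other two fixed; the all-at-once formulation above exposes the full gradient to the optimizer in a single call.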
Laplace-domain waveform inversion using the l-BFGS method
Published in Geosystem Engineering, 2019
Instead, several approximations to the Newton method can be used for large-scale inverse problems. Hu, Abubakar, Habashy, and Liu (2011) used the nonlinear conjugate gradient method, which updates the model parameters using the previous search direction and the current gradient direction. The steepest descent method can also be used in waveform inversion provided an efficient preconditioner is available, such as the diagonal elements of the approximate Hessian (Operto, Virieux, Dessa, & Pascal, 2006) or the pseudo-Hessian (Shin, Jang, & Min, 2001). Quasi-Newton methods, such as the limited-memory Broyden–Fletcher–Goldfarb–Shanno (l-BFGS) method, efficiently approximate the inverse of the Hessian (Brossier, Operto, & Virieux, 2009; Nocedal & Wright, 2006). The Gauss–Newton conjugate gradient method (Pyun, Son, & Shin, 2011b) and the Newton conjugate gradient method (Métivier, Bretaudeau, Brossier, Operto, & Virieux, 2014) apply the Hessian implicitly through conjugate gradient iterations. Métivier and Brossier (2016) compared several optimization methods for frequency-domain waveform inversion and showed that l-BFGS outperformed the steepest descent method and the nonlinear conjugate gradient method in both efficiency and accuracy. The truncated Newton method yielded better results than the l-BFGS method; however, it required significantly more computation.
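The relative efficiency of these optimizers can be tried on a small test problem; the sketch below runs SciPy’s nonlinear conjugate gradient ('CG') and L-BFGS-B implementations on the Rosenbrock function and reports the iteration counts. This is only a toy comparison on a standard test function, not a waveform-inversion experiment.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.full(20, -1.0)  # illustrative 20-dimensional starting point

for method in ("CG", "L-BFGS-B"):
    res = minimize(rosen, x0, jac=rosen_der, method=method)
    print(f"{method:8s} iterations = {res.nit:4d}  final objective = {res.fun:.3e}")
```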
Nonlinear conjugate gradient method for identifying Young's modulus of the elasticity imaging inverse problem
Published in Inverse Problems in Science and Engineering, 2021
Talaat Abdelhamid, Rongliang Chen, Md. Mahbub Alam
In this section, we describe the nonlinear conjugate gradient method for solving the numerical optimization problem. Each iteration requires solving the forward problem in order to find the minimizer. The nonlinear conjugate gradient method stops when the approximate solution is sufficiently accurate or when the discrepancy principle used as the stopping criterion is satisfied. The main steps of the proposed algorithm are given below.
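The algorithm listing itself is not reproduced in this excerpt. As a generic illustration of a nonlinear conjugate gradient iteration (not the authors’ elasticity-imaging algorithm, which requires a forward solve at each iteration), a Fletcher–Reeves variant with a simple backtracking line search might look like the following; the quadratic test objective is an illustrative stand-in for the real misfit functional.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, max_iter=200, tol=1e-8):
    """Generic Fletcher-Reeves nonlinear conjugate gradient with a
    backtracking (Armijo) line search. Illustrative sketch only."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                    # initial search direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:           # stopping criterion
            break
        if g @ d >= 0:                        # safeguard: restart if d is not a descent direction
            d = -g
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx + 1e-4 * alpha * (g @ d):
            alpha *= 0.5                      # backtrack until sufficient decrease
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)      # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Usage on a small quadratic (stand-in objective, not the elasticity misfit).
Q = np.diag([1.0, 10.0, 100.0])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x
print(nonlinear_cg(f, grad, np.array([1.0, 1.0, 1.0])))
```

In an inverse-problem setting, evaluating `f` and `grad` would each involve a forward solve, and the iteration would terminate once the data misfit falls below the noise level prescribed by the discrepancy principle rather than a fixed gradient tolerance.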