Non-destructive evaluation and inverse methods
Published in Geotechnics of Roads: Fundamentals, 2018
Bernardo Caicedo
Iterative methods used for back-calculation analyses include zero-order methods such as the simplex method and genetic algorithms, gradient-based methods such as the steepest descent and gradient descent methods, the Hessian matrix method, and the adjoint state method. All of these methods follow the same procedure: define an objective function based on the error between field data and computed data, and set a target error; choose a set of elastic constants for each layer; update the set of elastic constants; and repeat the process until the error is minimized.
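A minimal sketch of this loop in Python, under stated assumptions: compute_deflections is a hypothetical stand-in for the pavement forward model (a real back-calculation would call a layered-elastic or finite-element code), and the gradient is estimated by finite differences, since the forward model is treated as a black box.

```python
import numpy as np

def compute_deflections(moduli):
    # Hypothetical forward model: in a real back-calculation this would be a
    # layered-elastic or finite-element pavement code returning deflections.
    return 1.0 / np.sqrt(moduli)

measured = compute_deflections(np.array([200.0, 120.0, 60.0]))  # synthetic "field" data

def objective(moduli):
    # Error between field data and computed data (sum of squared residuals).
    return np.sum((compute_deflections(moduli) - measured) ** 2)

moduli = np.array([300.0, 100.0, 80.0])  # initial set of elastic constants (MPa)
target_error = 1e-8
step = 5e5          # gradient-descent step size, hand-tuned for this toy model
h = 1e-3            # finite-difference increment

for it in range(1000):
    err = objective(moduli)
    if err < target_error:                  # stop once the target error is reached
        break
    # Finite-difference gradient: the forward model is treated as a black box
    grad = np.zeros_like(moduli)
    for i in range(moduli.size):
        m = moduli.copy()
        m[i] += h
        grad[i] = (objective(m) - err) / h
    moduli -= step * grad                   # update the set of elastic constants

print(it, err, moduli)                      # moduli approach the "true" values
```

The finite-difference gradient costs one extra forward run per layer; this is exactly the cost the adjoint state method avoids, since it delivers the full gradient at the price of a single additional solve.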
A symmetric grouped and ordered multi-secant Quasi-Newton update formula
Published in Optimization Methods and Software, 2022
Nicolas Boutet, Joris Degroote, Rob Haelterman
From an optimization problem min_x f(x), we derive the following root-finding problem: g(x) := ∇f(x) = 0, where f is the objective function of the optimization problem. We assume the problem has the following characteristics: the black box makes it possible to calculate the objective function f(x). Applying Newton's method is, however, not possible, as, due to the black box, the analytic form of g (and hence of its Jacobian) is unknown. In order to apply quasi-Newton methods, the gradient of the problem can be estimated (for example, thanks to the adjoint state method [13,19,34]).
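A minimal sketch of this setting in Python, assuming a plain single-secant (non-symmetric) Broyden update rather than the paper's symmetric grouped multi-secant formula; here g is an explicit stand-in for the black-box gradient that would, in practice, be estimated by the adjoint state method.

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

def g(x):
    # Stand-in for the black-box gradient g(x) = grad f(x); in the paper's
    # setting this would be estimated, e.g. by the adjoint state method.
    # Here f(x) = 0.5 x^T A x - b^T x, so g(x) = A x - b.
    return A @ x - b

x = np.zeros(2)
B = np.eye(2)       # initial approximation of the Jacobian of g (the Hessian of f)
gx = g(x)

for k in range(50):
    if np.linalg.norm(gx) < 1e-10:
        break
    dx = np.linalg.solve(B, -gx)            # quasi-Newton step: solve B dx = -g(x)
    x_new = x + dx
    g_new = g(x_new)
    dg = g_new - gx
    # Broyden's rank-1 update enforces the secant condition B_new dx = dg
    B += np.outer(dg - B @ dx, dx) / (dx @ dx)
    x, gx = x_new, g_new

print(k, x)                                 # converges to the solution of A x = b
```

Each update satisfies only the most recent secant condition; the paper's contribution is precisely to combine several secant conditions at once while keeping the approximation symmetric, which this single-secant sketch does not attempt.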
Combining adaptive dictionary learning with nonlocal similarity for full-waveform inversion
Published in Inverse Problems in Science and Engineering, 2021
Hongsun Fu, Hongyu Qi, Ran Hua
When a local optimization technique (such as the conjugate gradient method, the steepest-descent method, or a quasi-Newton method) is used to minimize the misfit function in (1), the gradient of Ψ is required. In this paper, we adopt the L-BFGS algorithm [31], a popular quasi-Newton strategy that achieves much faster convergence at minimal computational cost. More specifically, the L-BFGS algorithm iteratively updates the model by setting m_{k+1} = m_k − α_k H_k ∇Ψ(m_k), where ∇Ψ(m_k) is the misfit gradient; it can be calculated by the adjoint-state method [32,33], and the specific steps for complex-valued fields are given in Barucq et al. ([34], Appendix A). Furthermore, the L-BFGS method does not require complete storage of H_k (an approximation of the inverse Hessian), which would be prohibitive for large-scale models. The step length α_k is chosen by a line search that satisfies the Wolfe conditions [31,35].
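A minimal sketch of this update in Python, assuming the standard L-BFGS two-loop recursion and an Armijo backtracking line search in place of a full Wolfe line search; misfit and misfit_grad are toy stand-ins for the FWI misfit and its adjoint-state gradient.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    # Two-loop recursion: applies H_k (an approximation of the inverse Hessian)
    # to grad without ever storing H_k, using only the stored pairs (s_i, y_i).
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:                                # standard initial scaling H_0 = gamma * I
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * (y @ q)
        q += (a - beta) * s
    return q

m_true = np.array([1.0, -2.0, 0.5])

def misfit(m):
    return np.sum((m - m_true) ** 2)

def misfit_grad(m):
    # In FWI this gradient would be computed by the adjoint-state method.
    return 2.0 * (m - m_true)

m, memory = np.zeros(3), 5
s_list, y_list = [], []
gk = misfit_grad(m)
for k in range(100):
    if np.linalg.norm(gk) < 1e-10:
        break
    d = -lbfgs_direction(gk, s_list, y_list)
    alpha, c1 = 1.0, 1e-4
    # Armijo backtracking; a production code would enforce the full Wolfe conditions
    while misfit(m + alpha * d) > misfit(m) + c1 * alpha * (gk @ d):
        alpha *= 0.5
    m_new = m + alpha * d
    g_new = misfit_grad(m_new)
    s_list.append(m_new - m)
    y_list.append(g_new - gk)
    if len(s_list) > memory:                  # keep only the most recent pairs
        s_list.pop(0)
        y_list.pop(0)
    m, gk = m_new, g_new

print(k, m)                                   # converges to m_true
```

The memory parameter caps storage at a few vector pairs per iteration, which is what makes the method viable when the model m has millions of unknowns, as in full-waveform inversion.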
New approach to identify the initial condition in degenerate hyperbolic equation
Published in Inverse Problems in Science and Engineering, 2019
K. Atifi, EL-H. Essoufi, B. Khouiti
Firstly, we present a new theorem which gives the existence and uniqueness of the weak solutions of problem (1.4) for each value of the approximation parameter. Secondly, we prove that, as this parameter tends to zero, the sequence of weak solutions of problem (1.4) converges to the weak solution of problem (1.1). Thirdly, with the aim of showing that the minimization problem and the direct problem are well posed, we prove that the solution depends continuously on the initial condition; to this end, we prove the continuity of the map that sends an initial state to the corresponding weak solution of (1.4). Finally, we prove the differentiability of the functional J, which gives the existence of the gradient of J; this gradient is computed using the adjoint state method. We also present some numerical experiments to show the performance of this approach.
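A minimal sketch of such an adjoint-state gradient computation in Python, assuming a generic discrete linear evolution u^{n+1} = A u^n as a stand-in for the discretized problem (1.4), with J(u0) = 0.5 ||u^N − d||^2; the gradient with respect to the initial state is obtained by one backward sweep with A^T, and is checked against finite differences.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 6, 20                       # state size, number of time steps

# One explicit time step of a stable linear scheme; a stand-in for the
# discretized evolution operator of a problem like (1.4).
A = np.eye(n) + 0.1 * rng.standard_normal((n, n)) / n

def forward(u0):
    u = u0.copy()
    for _ in range(N):
        u = A @ u
    return u

d = forward(rng.standard_normal(n))  # synthetic observation at final time

def J(u0):
    # Misfit functional J(u0) = 0.5 * ||u^N(u0) - d||^2
    return 0.5 * np.sum((forward(u0) - d) ** 2)

def grad_J_adjoint(u0):
    # Adjoint state method: a single backward sweep with A^T yields the
    # full gradient at the cost of one extra (adjoint) solve.
    p = forward(u0) - d            # terminal condition of the adjoint state
    for _ in range(N):
        p = A.T @ p                # adjoint equation marched backward in time
    return p                       # adjoint state at "time zero" is grad J(u0)

# Sanity check against a central finite-difference gradient
u0 = rng.standard_normal(n)
g = grad_J_adjoint(u0)
h = 1e-6
g_fd = np.array([(J(u0 + h * e) - J(u0 - h * e)) / (2 * h) for e in np.eye(n)])
print(np.max(np.abs(g - g_fd)))    # should agree to roughly eight digits
```

The backward sweep costs the same as one forward solve regardless of the dimension of u0, which is why the adjoint state method is the standard way to obtain the gradient of J in initial-condition identification problems.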