Constitutive Law of GB Walls with Damage Effects
Published in Wenguang Li, Biliary Tract and Gallbladder Biomechanical Modelling with Physiological and Clinical Elements, 2021
The lsqnonlin function in MATLAB was chosen to carry out the parameter optimisation by minimising the objective function Eq. (12.3). The lsqnonlin function implements the 'Trust-Region-Reflective' optimisation algorithm. In this algorithm, the objective function is approximated by a model function, i.e. a quadratic function, and the trust region is a subset of the domain of the objective function within which the model is trusted to be accurate; the model function is minimised over this region. The search step and the size of the trust region are decided and updated according to the ratio of the actual change in the objective function to the change predicted by the model function, so as to ensure a sufficient reduction of the objective function. This procedure can cause a step to cross one of the bounds; in that case the search direction is reflected back into the interior of the region constrained by the bounds, following the law of reflection in optics at that bound. Compared with the Newton method and the Levenberg-Marquardt algorithm, the Trust-Region-Reflective algorithm ensures that the optimisation iterates remain in the strictly feasible region, and its convergence rate is second order (Li 1993).
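The ratio-driven acceptance and radius update described above can be sketched in a few lines of Python. This is a minimal illustration only: it uses the simple Cauchy-point step rather than lsqnonlin's actual reflective subspace solver, it omits the bound handling, and all function names and constants are my own.

```python
import numpy as np

def cauchy_point(g, B, delta):
    # Minimiser of the quadratic model along -g, clipped to the region boundary
    gBg = g @ B @ g
    tau = 1.0 if gBg <= 0 else min(1.0, np.linalg.norm(g)**3 / (delta * gBg))
    return -(tau * delta / np.linalg.norm(g)) * g

def trust_region(f, grad, hess, x, delta=1.0, max_iter=200, tol=1e-8):
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        if np.linalg.norm(g) < tol:
            break
        p = cauchy_point(g, B, delta)
        predicted = -(g @ p + 0.5 * p @ B @ p)   # reduction promised by the model
        rho = (f(x) - f(x + p)) / predicted      # actual / predicted reduction
        if rho < 0.25:
            delta *= 0.25                        # poor agreement: shrink the region
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta *= 2.0                         # good agreement at boundary: expand
        if rho > 0.1:                            # sufficient reduction: accept step
            x = x + p
    return x

# Example: an ill-conditioned quadratic with minimum at the origin
f = lambda v: v[0]**2 + 10 * v[1]**2
g = lambda v: np.array([2 * v[0], 20 * v[1]])
H = lambda v: np.diag([2.0, 20.0])
x_star = trust_region(f, g, H, np.array([5.0, 1.0]))
```

Because the model here is the exact quadratic, every step has ratio 1 and is accepted; on a general nonlinear objective the shrink branch is what keeps the iterates honest.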
Seismic Data Regularization and Imaging Based on Compressive Sensing and Sparse Optimization
Published in C.H. Chen, Compressive Sensing of Earth Observations, 2017
Because seismic reconstruction is computationally demanding, fast and robust methods are required to improve the computational efficiency (Trad 2009). However, most of the abovementioned methods belong to the class of line-search methods; the trust region method, which can provide globally convergent solutions for nonlinear problems, has received less attention in sparse optimization (Yuan 1993). In the trust region method, a quadratic approximation of the objective function at the current iteration point is built as a trust region subproblem, and a descent direction is then obtained by solving this subproblem (Yuan 1993). Traditional trust region methods generally define the trust region with the L2 norm. In this subsection, an L1 norm trust region method is proposed: an L1 norm trust region subproblem (TRSL1) is formulated, and a fast projected gradient method for the subproblem is developed to improve the computational efficiency significantly. The TRSL1 method opens a new research avenue for seismic reconstruction.
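As an illustration of the idea only (not the authors' implementation), the following Python sketch solves a small L1-norm trust-region subproblem with a projected gradient iteration, using the standard sort-based projection onto the L1 ball; the function names and step-size choice are assumptions of mine.

```python
import numpy as np

def project_l1_ball(v, radius):
    # Sort-based Euclidean projection onto {p : ||p||_1 <= radius}
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    theta = (css[k] - radius) / (k + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def trsl1_projected_gradient(g, B, radius, steps=500):
    # min_p  g^T p + 0.5 p^T B p   subject to  ||p||_1 <= radius
    L = np.linalg.norm(B, 2)       # Lipschitz constant of the model gradient
    p = np.zeros_like(g)
    for _ in range(steps):
        p = project_l1_ball(p - (g + B @ p) / L, radius)
    return p
```

The L1 constraint is what distinguishes TRSL1 from an L2 trust region: the projection soft-thresholds the step, so the subproblem solutions are themselves sparse, which matches the sparse-optimization setting.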
A time domain approach for reconstruction of moving loads acting on bridges from dynamic response data
Published in Hiroshi Yokota, Dan M. Frangopol, Bridge Maintenance, Safety, Management, Life-Cycle Sustainability and Innovations, 2021
A. Firus, J. Schneider, R. Kemmler
Like the classical line search algorithms, the trust-region method requires a (mostly quadratic) approximation of the objective function. However, while a line search uses the approximation to find a search direction and then determines a suitable step length along it, the trust-region method defines a region around the current iteration point in which the approximation is trusted to be a good representation of the original objective function. A step is then chosen by searching for the minimizer of the approximated function within this region, so that the direction and the length of the step are determined simultaneously. Figure 3 illustrates the line search and trust-region steps for a function of two variables. A line search algorithm seeks the minimum of the approximated function along the search direction, which in the illustrated case yields only a small reduction of the objective function. The trust-region method instead searches for a minimizer of the approximated function within the trusted region, which leads to a stronger reduction of the objective function and thus to better progress towards the solution. If a particular step within an iteration is unacceptable, the size of the trust region is reduced and a new minimizer has to be found; in general, both the direction and the length of the step change when the size of the trust region changes. Without reproducing the mathematical derivations of trust-region methods, the reader is referred to the book by Nocedal and Wright (2006). It gives a very comprehensive overview of trust-region methods and proves, among other results, that they can achieve super-linear convergence when provided with the exact gradient and Hessian of the objective function. This aspect is addressed in Section 2.4.
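The joint choice of step direction and length can be made concrete with Powell's dogleg path, one standard way (not necessarily the one used in this chapter) of approximately solving the subproblem when the model Hessian is positive definite: as the region shrinks, the step rotates from the Newton direction towards steepest descent.

```python
import numpy as np

def dogleg_step(g, B, delta):
    """One trust-region step via the dogleg path (assumes B positive definite)."""
    p_newton = -np.linalg.solve(B, g)            # full quadratic-model minimiser
    if np.linalg.norm(p_newton) <= delta:
        return p_newton                          # model minimiser already inside
    p_cauchy = -((g @ g) / (g @ B @ g)) * g      # steepest-descent minimiser
    if np.linalg.norm(p_cauchy) >= delta:
        return -(delta / np.linalg.norm(g)) * g  # clip to the region boundary
    # Otherwise follow the segment from p_cauchy towards p_newton until it
    # crosses the trust-region boundary ||p|| = delta
    d = p_newton - p_cauchy
    a, b, c = d @ d, 2 * (p_cauchy @ d), p_cauchy @ p_cauchy - delta**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return p_cauchy + tau * d
```

Calling this with a shrinking `delta` shows the behaviour described above: a large region returns the Newton step unchanged, while a small region returns a boundary step bent towards the gradient direction.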
Trust-region algorithms for training responses: machine learning methods using indefinite Hessian approximations
Published in Optimization Methods and Software, 2020
Jennifer B. Erway, Joshua Griffin, Roummel F. Marcia, Riadh Omheni
Trust-region methods minimize a function f by modelling changes in the objective function using quadratic models. Each iteration requires approximately solving a trust-region subproblem. Specifically, at the kth iteration, the trust-region subproblem is to minimize q_k(p) = g_k^T p + (1/2) p^T B_k p subject to ||p|| <= δ_k, where g_k = ∇f(x_k), B_k is a symmetric approximation to ∇²f(x_k), and δ_k is a given positive trust-region radius. Basic trust-region methods update the current approximate minimizer for f only if the ratio between the actual and predicted change in function value is sufficiently large. If the ratio is sufficiently large, the step p_k is accepted and x_{k+1} = x_k + p_k. When this is not the case, δ_k is reduced and the trust-region subproblem is re-solved. The solution of the trust-region subproblem is the computational bottleneck of most trust-region methods. The primary advantage of using a trust-region method is that B_k does not have to be a positive-definite matrix; in particular, it may be a limited-memory SR1 matrix.
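To see why an indefinite B_k is unproblematic, note that the subproblem solution then simply lies on the boundary of the region. The sketch below solves a small dense subproblem by bisection on the Lagrange multiplier in the optimality conditions (B + λI)p = -g with λ ≥ max(0, -λ_min(B)); it is a toy stand-in for the specialised limited-memory solvers discussed in the paper, and it assumes the degenerate "hard case" does not occur.

```python
import numpy as np

def solve_trs(g, B, delta, tol=1e-10):
    """Solve min g^T p + 0.5 p^T B p s.t. ||p|| <= delta, allowing indefinite B.
    Bisects on the multiplier lam in (B + lam*I) p = -g; assumes no hard case."""
    n = len(g)
    lam_min = np.linalg.eigvalsh(B)[0]
    if lam_min > 0:
        p = -np.linalg.solve(B, g)
        if np.linalg.norm(p) <= delta:
            return p                             # interior (Newton-like) solution
    step = lambda lam: -np.linalg.solve(B + lam * np.eye(n), g)
    lo = max(0.0, -lam_min) + 1e-12              # lower end: B + lam*I barely PSD
    hi = lo + 1.0
    while np.linalg.norm(step(hi)) > delta:      # grow hi until step fits inside
        hi *= 2.0
    while hi - lo > tol:                         # ||step(lam)|| is decreasing in lam
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.linalg.norm(step(mid)) > delta else (lo, mid)
    return step(hi)
```

With B = diag(1, -2) the quadratic model is unbounded below, yet the constrained subproblem still has a well-defined boundary minimizer; this is exactly the property that lets the method exploit indefinite SR1 curvature information.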
Evolutionary reliable regional Kriging surrogate for expensive optimization
Published in Engineering Optimization, 2019
Prediction reliability is as important as optimality in expensive optimization. A surrogate built from a small number of training samples may yield an inadequate model with poor generality, leading to a false predicted optimum or to entrapment in a local optimum. An approximate model usually predicts more accurately in the space surrounding its training samples. Therefore, instead of searching regions far from the sample sites, which are liable to larger errors, it is more reliable to constrain the search to the regions surrounding the training samples. A trust region is defined by a trust radius around the current best point; the radius is initialized at a user-defined value from an arbitrary starting point and is iteratively updated to shrink or expand the region depending on the prediction accuracy (Alexandrov et al. 1998). The trust region approach shows excellent convergence to a local optimum (Abdel-Malek, Ebid, and Mohamed 2017; Forrester and Keane 2009).
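A toy one-dimensional sketch of this shrink-or-expand logic follows; a polynomial fit stands in for the Kriging surrogate, and the test function, sample counts, and thresholds are all hypothetical choices of mine.

```python
import numpy as np

def expensive_f(x):
    # Hypothetical stand-in for an expensive simulation
    return np.sin(3 * x) + 0.5 * x**2

def surrogate_trust_region(f, x, radius=1.0, iters=30):
    for _ in range(iters):
        # Sample the expensive function only inside the trusted region
        xs = np.linspace(x - radius, x + radius, 5)
        coeffs = np.polyfit(xs, f(xs), 2)          # local quadratic surrogate
        cand = np.linspace(x - radius, x + radius, 201)
        x_new = cand[np.argmin(np.polyval(coeffs, cand))]
        predicted = np.polyval(coeffs, x) - np.polyval(coeffs, x_new)
        actual = f(x) - f(x_new)
        if predicted > 0 and actual / predicted > 0.75:
            radius *= 2.0                          # surrogate accurate: expand
        elif predicted <= 0 or actual / predicted < 0.25:
            radius = max(radius * 0.5, 1e-6)       # surrogate unreliable: shrink
        if actual > 0:
            x = x_new                              # accept only improving points
    return x
```

The key point mirrors the text: candidate optima are only ever taken from inside the region where the surrogate was trained, and the radius itself encodes how far the surrogate is currently trusted.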
Surrogate-based model parameter optimization based on gas explosion experimental data
Published in Engineering Optimization, 2019
Anna-Lena Both, Helene Hisken, Jan-J. Rückmann, Trond Steihaug
The surrogate-based optimization problem is solved by a trust region optimization algorithm combined with a multi-start strategy for global optimization. The trust region algorithm is provided by the routine lsqnonlin from MATLAB® (2015b, The MathWorks Inc.), which is tailored to nonlinear least-squares problems with box constraints. It is based on the interior trust region algorithm for general nonlinear objective functions subject to bounds proposed by Coleman and Li (1996). The algorithm involves solving a quadratic trust region subproblem with the restriction that the new iterate lies within the interior of the feasible set. In the MATLAB routine, an approximate solution to the subproblem is found by first solving the subproblem without the box-constraint restriction using a two-dimensional subspace method and then truncating the solution into the interior of the feasible set. The two-dimensional subspace method exploits the special structure of a least-squares objective; see the MATLAB user's manual (The MathWorks Inc 2017). Convergence of the trust region algorithm can be established when a regularized form of the objective function of problem (7) is employed.
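For readers without MATLAB, an analogous open-source routine is scipy.optimize.least_squares with method='trf', which likewise implements a trust-region-reflective method for box-constrained nonlinear least squares. The model, data, and parameter values below are invented purely for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic data from y = a * exp(-b * t) with a = 2.5, b = 1.3 (made-up values)
rng = np.random.default_rng(0)
t = np.linspace(0, 4, 50)
y = 2.5 * np.exp(-1.3 * t) + 0.01 * rng.standard_normal(t.size)

def residuals(theta):
    # Vector of residuals; the solver minimises 0.5 * sum(residuals**2)
    a, b = theta
    return a * np.exp(-b * t) - y

# Box-constrained fit with the trust-region-reflective method
result = least_squares(residuals, x0=[1.0, 1.0],
                       bounds=([0.0, 0.0], [10.0, 10.0]), method='trf')
```

As with lsqnonlin, the solver receives the residual vector rather than a scalar objective, which lets it exploit the least-squares structure when building the quadratic subproblem.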