Simulation of Crystalline Nanoporous Materials and the Computation of Adsorption/Diffusion Properties
Published in T. Grant Glover, Bin Mu, Gas Adsorption in Metal-Organic Frameworks, 2018
The Newton–Raphson method uses not only the first derivative but also the Hessian matrix. At the cost of more memory and computation, the algorithm becomes more reliable and converges faster (fewer steps are needed). In addition, methods that use second derivatives can drive the first derivatives arbitrarily close to zero. The Newton–Raphson algorithm tends to converge to a stationary point whose curvature character, that is, the number of negative eigenvalues of the Hessian matrix, matches that of the Hessian at the starting point. Eigenmode-following removes this limitation by shifting some of the eigenvalues to change their sign and thereby achieve the desired curvature. Importantly, of the methods discussed, only eigenmode-following guarantees that a true (local) minimum is found; the other methods require a subsequent check to determine whether the stationary point is indeed a minimum.
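As a concrete illustration (not taken from the chapter), the sketch below shows a plain Newton–Raphson iteration on a hypothetical objective, followed by the kind of post-hoc eigenvalue check the text recommends for confirming that a true local minimum was reached; the function names and the quadratic test problem are assumptions.

```python
# Minimal sketch: Newton-Raphson minimisation plus a Hessian eigenvalue check.
# f_grad and f_hess are assumed callables returning the gradient vector and
# Hessian matrix of a hypothetical objective at x.
import numpy as np

def newton_minimise(f_grad, f_hess, x0, tol=1e-8, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = f_grad(x)
        if np.linalg.norm(g) < tol:        # first derivatives driven towards zero
            break
        H = f_hess(x)
        x = x - np.linalg.solve(H, g)      # Newton-Raphson step
    # Post-check: all Hessian eigenvalues positive => true local minimum
    eigvals = np.linalg.eigvalsh(f_hess(x))
    return x, bool(np.all(eigvals > 0))

# Example on a simple quadratic bowl f(x) = 0.5 x^T A x with A positive definite
A = np.array([[3.0, 1.0], [1.0, 2.0]])
x_min, is_minimum = newton_minimise(lambda x: A @ x, lambda x: A, [1.0, -2.0])
```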
Image Descriptors and Features
Published in Manas Kamal Bhuyan, Computer Vision and Image Processing, 2019
The Hessian corner detector is very similar to the Harris corner detector. This detector considers a rotationally invariant measure, which is calculated from the determinant of the Hessian matrix. A second-order Taylor series expansion is used to derive the Hessian matrix, which can be used to describe the local image structure around a point in the image.
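A minimal sketch of the determinant-of-Hessian response described above, assuming a grayscale image stored as a 2-D float array; the use of Gaussian derivative filters and the scale parameter sigma are illustrative choices, not details from the book.

```python
# Minimal sketch: determinant-of-Hessian response for corner/blob detection.
import numpy as np
from scipy.ndimage import gaussian_filter

def det_of_hessian(image, sigma=1.5):
    # Second-order derivatives approximated with Gaussian derivative filters
    Ixx = gaussian_filter(image, sigma, order=(0, 2))  # d^2/dx^2
    Iyy = gaussian_filter(image, sigma, order=(2, 0))  # d^2/dy^2
    Ixy = gaussian_filter(image, sigma, order=(1, 1))  # d^2/dxdy
    # Rotationally invariant measure: det(H) = Ixx*Iyy - Ixy^2
    return Ixx * Iyy - Ixy ** 2

# Candidate interest points are local maxima of the response above a threshold.
```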
Model updating by minimizing errors in modal parameters
Published in Heung-Fai Lam, Jia-Hua Yang, Vibration Testing and Applications in System Identification of Civil Engineering Structures, 2023
Both the gradient vector and the Hessian matrix can be approximated by the Taylor series expansion. A two-variable function J(θ1, θ2), shown in Figure 6.35, is used as an example.
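For reference, the second-order Taylor expansion of a two-variable function has the generic form below, in which the gradient vector and the Hessian matrix appear explicitly; the notation is generic and may differ from that used in the book.

```latex
% Second-order Taylor expansion of J about \boldsymbol{\theta}_0 = (\theta_{1,0}, \theta_{2,0})
J(\boldsymbol{\theta}) \approx J(\boldsymbol{\theta}_0)
  + \nabla J(\boldsymbol{\theta}_0)^{\mathrm{T}} (\boldsymbol{\theta} - \boldsymbol{\theta}_0)
  + \tfrac{1}{2} (\boldsymbol{\theta} - \boldsymbol{\theta}_0)^{\mathrm{T}}
    \mathbf{H}(\boldsymbol{\theta}_0) (\boldsymbol{\theta} - \boldsymbol{\theta}_0),
\qquad
\mathbf{H} =
\begin{bmatrix}
  \partial^2 J / \partial \theta_1^2 & \partial^2 J / \partial \theta_1 \partial \theta_2 \\
  \partial^2 J / \partial \theta_2 \partial \theta_1 & \partial^2 J / \partial \theta_2^2
\end{bmatrix}.
```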
Travel time analysis and dimension optimisation design of double-ended compact storage system
Published in International Journal of Production Research, 2023
Qing Yan, Jiansha Lu, Hongtao Tang, Yan Zhan, Xuemei Zhang, Yingde Li
Since V is a constant, the problem can be transformed into an equivalent optimisation problem (EOP). This optimisation problem is usually regarded as a convex programming problem. To prove that the objective function is convex, we first define the objective function and then test its Hessian matrix: if the Hessian matrix of the function is positive definite, the function is convex. The Hessian matrix of the objective function can then be written out explicitly and checked for positive definiteness.
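A minimal sketch of the positive-definiteness test mentioned above, using an eigenvalue check; the 2x2 example matrix is illustrative only and is not the Hessian derived in the paper.

```python
# Minimal sketch: convexity check via positive definiteness of the Hessian.
import numpy as np

def is_positive_definite(H, tol=1e-12):
    """A symmetric matrix is positive definite iff all its eigenvalues are > 0."""
    eigvals = np.linalg.eigvalsh(np.asarray(H, dtype=float))
    return bool(np.all(eigvals > tol))

# Generic 2x2 Hessian used only for illustration
H_example = np.array([[2.0, 0.5],
                      [0.5, 1.0]])
print(is_positive_definite(H_example))   # True -> the function is convex there
```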
An improved interval limit equilibrium method based on particle swarm optimisation: taking Dahua landslide as a case
Published in European Journal of Environmental and Civil Engineering, 2023
Ling Yang, Huanling Wang, Hongjie Chen, Yu Ning
Optimisation methods are often used to find the minimum and maximum of a constrained function. The interval function can be treated as a constrained function with bounded variables. Among traditional optimisation methods, the gradient descent method has difficulty choosing its step size (Petrović, 2015). The Newton method requires the second derivatives of the objective function and the inverse of the Hessian matrix, which involve very complicated calculations (Mokhtari et al., 2017; Bear & Cheng, 2010). Moreover, in these methods, the search for the global minimum and maximum in the presence of many local minima and maxima depends largely on the initial value, which should ideally be close to the true solution (Bear & Cheng, 2010). Consequently, a number of non-traditional methods based on principles of biological evolution were developed, such as the genetic algorithm, simulated annealing and particle swarm optimisation (PSO). The search in these methods depends on past experience rather than on the initial value, largely overcoming the drawbacks of the traditional methods.
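For illustration, a bare-bones PSO loop for a bound-constrained objective is sketched below; the inertia and acceleration coefficients, the sphere test function and all names are assumptions, not the improved interval method of the paper.

```python
# Minimal PSO sketch for a bound-constrained scalar objective.
import numpy as np

def pso_minimise(obj, lower, upper, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    x = rng.uniform(lower, upper, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                                      # velocities
    pbest, pbest_val = x.copy(), np.array([obj(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update uses each particle's own best and the swarm's best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lower, upper)                      # keep within bounds
        vals = np.array([obj(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Example: minimise the 2-D sphere function on [-5, 5]^2
best_x, best_f = pso_minimise(lambda p: np.sum(p ** 2), [-5, -5], [5, 5])
```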
On prediction of slope failure time with the inverse velocity method
Published in Georisk: Assessment and Management of Risk for Engineered Systems and Geohazards, 2023
Jie Zhang, Hong-zeng Yao, Zi-peng Wang, Ya-dong Xue, Lu-lu Zhang
The Hessian matrix is a square matrix composed of the second-order partial derivatives of the objective function, describing the local curvature of the function (Brown 2014). In linear algebra, for a matrix A, the condition number κ(A) can be calculated as κ(A) = λmax(A)/λmin(A) (8), where λmax(A) and λmin(A) represent the maximum and minimum eigenvalues of the matrix A, respectively. When κ(A) is relatively small, the matrix A is called a well-conditioned matrix; otherwise, it is called an ill-conditioned matrix (e.g. Ciarlet 1989). According to Equation (8), the condition number of the Hessian matrix measures how much the curvature differs across directions. A relatively large condition number implies that λmax is far larger than λmin, indicating that the curvature of the objective function in some directions is far greater than that in other directions. In such a case, there exists a relatively flat region, as the function changes much more slowly along some directions than along others (Pascanu et al. 2014). Once the search step of the optimisation falls into the flat region, the iteration may cease, because the large condition number can cause a loss of information through rounding errors, so that the value of the objective function is regarded as constant in the flat region. If the flat region is large, the search step from the initial point may easily fall into it, making the termination point of the optimisation algorithm sensitive to the initial value.
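A short sketch of the eigenvalue-ratio condition number for a symmetric Hessian, with an ill-conditioned diagonal example; the numbers are illustrative only.

```python
# Minimal sketch: condition number of a symmetric Hessian via its eigenvalues.
import numpy as np

def hessian_condition_number(H):
    eigvals = np.linalg.eigvalsh(np.asarray(H, dtype=float))
    return eigvals.max() / eigvals.min()

# Ill-conditioned example: curvature differs by four orders of magnitude,
# so the objective is nearly flat along one direction.
H = np.diag([1e4, 1.0])
print(hessian_condition_number(H))   # 10000.0
```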