Nonlinear Programming
Published in Albert G. Holzman, Mathematical Programming, 2020
Hestenes [118] and Powell [119] have independently suggested a penalty function type method for the solution of this program. Their method, called the augmented Lagrangian method, or method of multipliers, has some nice properties which make it more attractive for computations than the original penalty function technique. Define the augmented Lagrangian M(x, μ) by

$$M(x,\mu) = f(x) - \sum_{j=1}^{p} \mu_j h_j(x) + \tfrac{1}{2} c \sum_{j=1}^{p} \bigl(h_j(x)\bigr)^2 \qquad (426)$$

$$\phantom{M(x,\mu)} = L(x,\mu) + \tfrac{1}{2} c \sum_{j=1}^{p} \bigl(h_j(x)\bigr)^2 \qquad (427)$$

where c is a positive number. Note that here the quadratic penalty term is added to the Lagrangian L corresponding to (PE) and not to the objective function f, as in the penalty function method. We then have Theorem 34.
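For concreteness, the sketch below implements one plain method-of-multipliers loop in Python built around the M(x, μ) of (426): an inner unconstrained minimisation of M for fixed μ, followed by the standard multiplier update μ_j ← μ_j − c h_j(x). The toy problem, the fixed value of c, and the use of scipy.optimize.minimize for the inner step are illustrative assumptions, not part of the chapter.

```python
# Minimal sketch of the method of multipliers for a single equality
# constraint, using the augmented Lagrangian M(x, mu) of (426).
# Illustrative assumptions: the toy problem, the fixed penalty c, and
# scipy.optimize.minimize as the inner unconstrained solver.
import numpy as np
from scipy.optimize import minimize

def f(x):                          # objective f(x)
    return x[0]**2 + x[1]**2

def h(x):                          # equality constraint h(x) = 0
    return x[0] + x[1] - 1.0

def M(x, mu, c):                   # augmented Lagrangian, cf. (426)
    return f(x) - mu * h(x) + 0.5 * c * h(x)**2

mu, c = 0.0, 10.0                  # initial multiplier and penalty parameter
x = np.zeros(2)
for k in range(20):
    x = minimize(M, x, args=(mu, c)).x   # inner step: minimise M(., mu) over x
    mu = mu - c * h(x)                   # multiplier update (sign as in (426))
    if abs(h(x)) < 1e-8:
        break

print(x, mu)   # expected: x close to (0.5, 0.5), mu close to 1.0
```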
Utilising physics-informed neural networks for optimisation of diffusion coefficients in pseudo-binary diffusion couples
Published in Philosophical Magazine, 2023
Hemanth Kumar, Neelamegan Esakkiraja, Anuj Dash, Aloke Paul, Saswata Bhattacharyya
Various methods have been developed to solve the PDE-constrained optimisation problem [36], such as the penalty method [37], the augmented Lagrangian method [38], the primal and primal-dual methods [39], and the adjoint state method [40]. Among these, the penalty and augmented Lagrangian methods are most often employed. In the penalty method, the constrained optimisation problem is reduced to a sequence of unconstrained problems with varying penalty coefficients. The convergence and exactness of the solution are sensitive to the choice of penalty coefficients: if the coefficients are large, the method suffers from slow convergence due to ill-conditioning of the problem, whereas if they are small, the loss function fluctuates around local minima. In the augmented Lagrangian method, Lagrange multipliers are introduced in addition to penalty coefficients, which improves the rate of convergence compared with the penalty method and avoids ill-conditioning. With the development of neural networks with physics-constrained learning [27,41], one can mitigate such numerical challenges.
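As a rough illustration of the trade-off described above, the sketch below contrasts a pure penalty loss with an augmented Lagrangian loss on a one-dimensional toy problem: the penalty method needs a very large coefficient (and correspondingly small steps) to approach feasibility, while the augmented Lagrangian reaches it with a moderate, fixed coefficient via multiplier updates. The toy objective, constraint, step sizes, and update rule are illustrative assumptions and not the PINN loss used in the paper.

```python
# Minimal sketch contrasting a penalty loss with an augmented Lagrangian
# loss for a toy constrained fit.  Illustrative assumptions: the problem,
# the step sizes, and the update rule (not the paper's PINN formulation).
import numpy as np

def f(x):      # objective ("data") term
    return (x - 3.0)**2

def h(x):      # constraint residual to be driven to zero (e.g. a PDE residual)
    return x - 1.0

def grad_penalty(x, c):
    # d/dx [ f(x) + (c/2) h(x)^2 ]
    return 2.0*(x - 3.0) + c*h(x)

def grad_auglag(x, mu, c):
    # d/dx [ f(x) - mu*h(x) + (c/2) h(x)^2 ]
    return 2.0*(x - 3.0) - mu + c*h(x)

# Penalty method: exactness requires c -> infinity, which worsens conditioning.
x, c = 0.0, 1e4
for _ in range(20000):
    x -= 1e-5 * grad_penalty(x, c)        # tiny step forced by the large c
print("penalty:    x =", x, " h(x) =", h(x))   # small but nonzero violation

# Augmented Lagrangian: a moderate, fixed c plus multiplier updates.
x, mu, c = 0.0, 0.0, 10.0
for outer in range(20):
    for _ in range(2000):
        x -= 1e-2 * grad_auglag(x, mu, c)
    mu -= c * h(x)                        # multiplier update at fixed c
print("aug. Lagr.: x =", x, " h(x) =", h(x))   # h(x) driven to ~0
```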
Seismic analysis of damaged concrete gravity dams subjected to mainshock-aftershock sequences
Published in European Journal of Environmental and Civil Engineering, 2022
Mohammad Hossein Sadeghi, Javad Moradloo
To solve the original constrained problem, there is a class of algorithms named the augmented Lagrangian method, which combines the penalty and Lagrangian methods (Belytschko et al., 2000). It is essentially a Lagrangian method with an unconstrained objective and an additional penalty term. Unlike the pure Lagrangian method, this algorithm can handle sliding frictional contact problems. The contact problem requires the minimisation of Π so that the following expression tends to zero:
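The expression for Π referred to above is not reproduced in this excerpt. As a generic illustration of how the penalty and multiplier ingredients combine for a contact-type (non-penetration) constraint, the sketch below solves a one-dimensional spring-against-rigid-wall problem with an augmented Lagrangian; the gap function g(x) = x − wall ≥ 0, the penalty c, and the clipped multiplier update λ ← max(0, λ − c g(x)) are illustrative assumptions, not the formulation used by the authors.

```python
# Minimal sketch of an augmented Lagrangian treatment of a frictionless,
# non-penetration contact constraint g(x) >= 0 in one dimension.  The toy
# spring/wall problem, penalty c, and the clipped multiplier update are
# illustrative assumptions, not the functional used in the cited study.
import numpy as np

k, x0, wall = 100.0, -0.5, 0.0     # spring stiffness, rest position, rigid wall

def energy(x):                     # elastic energy to be minimised
    return 0.5 * k * (x - x0)**2

def gap(x):                        # non-penetration: g(x) = x - wall >= 0
    return x - wall

def grad_M(x, lam, c):
    # gradient of the augmented functional
    #   Pi(x) = energy(x) + (1/(2c)) * ( max(0, lam - c*g(x))**2 - lam**2 )
    t = max(0.0, lam - c * gap(x))
    return k * (x - x0) - t        # d/dx of the max-term is -t * g'(x), g'(x) = 1

lam, c, x = 0.0, 50.0, x0
for outer in range(30):
    for _ in range(5000):          # crude inner minimisation by gradient descent
        x -= 1e-3 * grad_M(x, lam, c)
    lam = max(0.0, lam - c * gap(x))   # clipped multiplier (contact pressure) update

print("x =", x, " gap =", gap(x), " contact force =", lam)
# expected: x ~ 0 (on the wall), gap ~ 0, contact force ~ k*|x0| = 50
```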