Fast Dual Optimization for Medical Image Segmentation
Published in Ayman El-Baz, Jasjit S. Suri, Big Data in Multimodal Medical Imaging, 2019
Jing Yuan, Ismail Ben Ayed, Aaron Fenster
Yuan et al. [32] showed that, with a variational analysis similar to that in [7,20], the convex optimization model (Expression 6.54) is equivalent to the dual formulation below:
$$\max_{p_s,\,p_t,\,p,\,\lambda}\ \int_\Omega p_s(x)\,dx$$
subject to
$$|p(x)| \le \alpha,\quad p_s(x) \le C_s(x),\quad p_t(x) \le C_t(x);$$
$$\big(\operatorname{div}(p - \lambda e) - p_s + p_t\big)(x) = 0,\quad \lambda(x) \le 0.$$
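As a minimal numerical sketch (not from the chapter), the dual feasibility conditions above can be checked on a discrete 2-D grid. The grid size, the capacity fields `Cs` and `Ct`, the bound `alpha`, and the direction field `e` used below are illustrative assumptions; the names `ps`, `pt`, `(px, py)`, and `lam` mirror the symbols $p_s$, $p_t$, $p$, and $\lambda$ in the dual formulation.

```python
import numpy as np

def divergence(px, py):
    """Backward-difference discrete divergence of the spatial flow p = (px, py)."""
    return (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))

def dual_feasible(ps, pt, px, py, lam, e_x, e_y, Cs, Ct, alpha, tol=1e-8):
    """Check the dual constraints:
       |p| <= alpha, ps <= Cs, pt <= Ct,
       div(p - lam*e) - ps + pt = 0, lam <= 0."""
    qx, qy = px - lam * e_x, py - lam * e_y          # the field p - lambda*e
    conservation = divergence(qx, qy) - ps + pt       # flow-conservation residual
    return (np.all(np.hypot(px, py) <= alpha + tol)   # |p(x)| <= alpha pointwise
            and np.all(ps <= Cs + tol)                # source-capacity constraint
            and np.all(pt <= Ct + tol)                # sink-capacity constraint
            and np.all(np.abs(conservation) <= tol)   # exact flow conservation
            and np.all(lam <= tol))                   # lambda(x) <= 0

# Trivial feasible point: all flows and lambda zero on a 4x4 grid.
n = 4
zeros = np.zeros((n, n))
ok = dual_feasible(zeros, zeros, zeros, zeros, zeros, zeros, zeros,
                   Cs=np.ones((n, n)), Ct=np.ones((n, n)), alpha=0.5)
```

The zero point is feasible because every inequality is slack and the conservation residual vanishes identically; any solver for this dual problem would start from such a feasible point and ascend the objective $\int_\Omega p_s\,dx$.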
Variational analysis and applications, Springer monographs in mathematics
Published in Optimization, 2019
This recent book on Variational Analysis and Applications is an excellent and clearly written monograph on variational analysis which provides easy access to the most fundamental parts of this relatively young field of mathematics. Variational analysis can be viewed as an outgrowth of the classical calculus of variations, constrained optimization, and optimal control, which go back to the 18th century. On the other hand, modern variational principles and techniques are largely based on perturbations, approximations, and the usage of generalized differentiation. All of this requires developing new forms of analysis that strongly combine and unify analytic and geometric ideas.
New formulas for subdifferentials of perturbed distance functions
Published in Optimization, 2023
Let $X$ be a real Banach space endowed with a norm $\|\cdot\|$, let $S \subset X$ be a nonempty closed subset, and let $f : S \to \mathbb{R}$ be a lower semicontinuous function. For each $x \in X$, consider the following perturbed optimization problem, denoted by $(P_x)$:
$$\min_{z \in S}\ \big\{ f(z) + \|x - z\| \big\}. \tag{1}$$
Perturbed optimization problems of this type were first presented and investigated by Baranger [1]. Since then, they have been studied extensively and applied to optimal control problems governed by partial differential equations; see, for example, [2–5]. Generic results on the solution existence and/or well-posedness of perturbed optimization problems have been established in [6–14].

Apart from these studies, there is another main research stream, which focuses on differential stability of the optimal value function of $(P_x)$. This function is called the perturbed distance function, denoted by $d_S^f$, and given by
$$d_S^f(x) = \inf_{z \in S}\ \big\{ f(z) + \|x - z\| \big\}. \tag{2}$$
In the case when $f \equiv 0$, the perturbed distance function reduces to the well-known distance function defined by $d_S(x) = \inf_{z \in S} \|x - z\|$ for each $x \in X$. The latter is a backbone of the theory of optimization and variational analysis (see, e.g., the papers [15–19] for original results and the books [20–23] for connections among different theories). It also plays an important role in the analysis of PDEs of Monge–Kantorovich type arising from problems in optimal transportation theory and shape optimization [24,25].

In the other case, when $S = X$, the exact penalization [26, Theorem 2.5], which plays a key role in algorithms for convex composite optimization [27] as a natural extension of the so-called big-M method from linear programming to nonlinear constrained programming, can be seen as a perturbed distance function. In a similar manner, the bounded approximants for monotone operators proposed by Fitzpatrick and Phelps [28], built from sequences of infimal convolutions of a function with multiples of the norm $\|\cdot\|$ (instead of the squared norm $\|\cdot\|^2$ used in the Moreau–Yosida approximation method, and thus applicable in non-reflexive Banach spaces), are other examples of functions of the form (2) with $S = X$.
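The definitions above can be made concrete with a small sketch. Here the ambient space is the real line with the absolute-value norm, and $S$ is a finite set so the infimum can be computed by direct enumeration; the particular points in `S` and the perturbation `f` are illustrative assumptions, not from the article.

```python
def perturbed_distance(x, S, f):
    """d_S^f(x) = inf over z in S of ( f(z) + |x - z| )."""
    return min(f(z) + abs(x - z) for z in S)

# An illustrative finite subset of the real line.
S = [0.0, 1.0, 3.0]

# With f identically zero, d_S^f reduces to the ordinary distance function d_S.
d = perturbed_distance(2.0, S, lambda z: 0.0)

# A nonzero perturbation f(z) = z trades distance against the penalty value.
dp = perturbed_distance(2.0, S, lambda z: z)
```

For `x = 2.0` the unperturbed distance is attained at the nearest points `1.0` and `3.0`, while the perturbation `f(z) = z` makes the candidates `0.0` and `1.0` tie at value `2.0`, illustrating how the optimal value (and minimizer) depends on `f`.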