Computational Complexity
Published in Craig A. Tovey, Linear Optimization and Duality, 2020
Computational complexity theory has relatively little to say about continuous nonlinear problems. In practice, convexity is usually the dividing line between easy and hard (global) optimization, though non-smoothness can make things difficult as well. The analog of determining whether a problem is easy or NP-hard is determining whether it has multiple local optima; the answer roughly dictates what kind of solution method to use and what solution quality to expect. However, when NP-completeness theory is applied to this domain, problems thought of as simple in practice turn out to be theoretically difficult, e.g., determining whether a point is a local minimum of a quadratic function. Thus traditional NP-completeness theory has not been particularly useful.
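To make the quadratic example concrete (an illustration we add here, following the classical reduction of Murty and Kabadi): deciding whether the origin is a local minimizer of a quadratic form over the nonnegative orthant,

\[
\min_{x \ge 0} \; x^{\top} Q x,
\]

amounts to deciding whether $Q$ is copositive, i.e. whether $x^{\top} Q x \ge 0$ for all $x \ge 0$: if $Q$ is copositive, the origin is a global minimizer, while any $x \ge 0$ with $x^{\top} Q x < 0$ yields feasible points $tx$ arbitrarily close to the origin with strictly negative value. Since copositivity testing is co-NP-complete, even certifying local optimality of a single point is intractable in the worst case.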
Mixed Type Duality and Saddle Point Criteria for Multiobjective Programming Problems with Nonsmooth Generalized Invexity
Published in Anurag Jayswal, Tadeusz Antczak, Continuous Optimization and Variational Inequalities, 2023
In recent years, many authors have presented necessary optimality conditions and employed them, together with additional assumptions, to establish sufficient optimality conditions, duality theorems, and saddle point theorems. This is because sufficient optimality conditions, duality theorems, and saddle point theorems are deduced by reversing the necessary optimality conditions under some extra assumptions. As is well known, convexity plays an important role in mathematical programming, underpinning many sufficient optimality conditions, duality theorems, and saddle point theorems, and such results continue to be established by extending the concept of convexity. One of the most significant generalizations of convexity of differentiable functions in optimality theory was introduced by Hanson [17]. The resulting notion of invexity (invariant convexity) has been applied widely in mathematical programming; see, for instance, [1–3, 5–8, 21–25, 30–35, 41, 45] and others. Fundamental results from optimization theory for various classes of differentiable multiobjective (fractional) programming problems have been presented in the literature. In [42], Rockafellar pointed out that in many practical applications of applied mathematics the functions involved are not necessarily differentiable. Shortly after, Clarke [15] defined a generalized derivative and subdifferential for locally Lipschitz functions, and many practical problems are now modeled with nonsmooth functions. For example, Reiland [41] used the generalized gradient of Clarke [15] to define nondifferentiable invexity for Lipschitz real-valued functions. Later on, optimality conditions and duality theorems were established for nonsmooth mathematical programming problems with generalized invex Lipschitz functions.
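For reference, we recall the two standard definitions invoked above. A differentiable function $f$ is invex in Hanson's sense if there exists a kernel $\eta(x,u)$ such that

\[
f(x) - f(u) \ge \nabla f(u)^{\top} \eta(x,u) \quad \text{for all } x, u,
\]

with convexity recovered by the choice $\eta(x,u) = x - u$. For a locally Lipschitz $f$, Clarke's generalized directional derivative and subdifferential are

\[
f^{\circ}(u; v) = \limsup_{y \to u,\ t \downarrow 0} \frac{f(y + tv) - f(y)}{t}, \qquad
\partial f(u) = \{ \xi : f^{\circ}(u; v) \ge \xi^{\top} v \ \ \forall v \},
\]

and nonsmooth invexity replaces $\nabla f(u)^{\top}\eta(x,u)$ above by $f^{\circ}(u;\eta(x,u))$.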
Mathematical Preliminaries
Published in Craig Friedman, Sven Sandow, Utility-Based Learning from Data, 2016
The practical importance and usefulness of convexity of an optimization problem derive to a large extent from the fact that any local minimum of a convex function on a convex set is also a global minimum. This considerably simplifies the numerical procedures that can be employed to solve such optimization problems.
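The standard one-line argument, which we sketch here for completeness: suppose $x^{*}$ is a local minimizer of a convex $f$ on a convex set $C$ and some $y \in C$ satisfies $f(y) < f(x^{*})$. Then for every $t \in (0,1]$ the point $z_t = (1-t)x^{*} + ty$ lies in $C$ by convexity of $C$, and

\[
f(z_t) \le (1-t) f(x^{*}) + t f(y) < f(x^{*}),
\]

so feasible points arbitrarily close to $x^{*}$ (letting $t \downarrow 0$) have strictly smaller objective value, contradicting local minimality.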
On Minty variational principle for nonsmooth multiobjective optimization problems on Hadamard manifolds
Published in Optimization, 2022
Balendu Bhooshan Upadhyay, Savin Treanţă, Priyanka Mishra
Convexity plays a central role in optimization theory and related areas. However, the notion of convexity is very restrictive and does not fit the modelling and solution of several real-world applications, for instance, in mathematical economics. In order to generalize the notion of convexity, Luc et al. [16] introduced a new class of generalized convex functions, namely ϵ-convex functions. Using ϵ-convexity as a tool, Ngai et al. [17] introduced the class of approximately convex functions. The class of approximately convex functions includes the classes of convex functions, weakly convex functions, strictly convex functions and strongly convex functions of a given order, and is stable under finite sums and finite suprema. Daniilidis and Georgiev [18] established that a locally Lipschitz function is approximately convex if and only if its Clarke subdifferential is a submonotone operator. Ngai and Penot [17] derived several characterizations of approximately convex functions in terms of generalized subdifferentials. Amini-Harandi and Farajzadeh [19] extended and refined the results of Daniilidis and Georgiev [18] from Banach spaces to locally convex spaces.
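For orientation, we record the standard definition (in the sense of Ngai, Luc and Théra): $f$ is approximately convex at $x_0$ if for every $\varepsilon > 0$ there exists $\delta > 0$ such that for all $x, y \in B(x_0, \delta)$ and all $t \in [0,1]$,

\[
f(tx + (1-t)y) \le t f(x) + (1-t) f(y) + \varepsilon\, t(1-t) \|x - y\|,
\]

so the convexity inequality is allowed to fail near $x_0$ only by an error proportional to an arbitrarily small multiple of $\|x - y\|$.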
Abstract convexity of set-valued topical functions with application in DC-type optimization
Published in Applicable Analysis, 2021
Chaoli Yao, Chunlei Tang, Jiawei Chen
Convexity is a crucial structure for optimization problems. Concepts and theories in convex analysis have substantially influenced the development of optimization as well as its applications in various ways. Even though these theories perform well when dealing with convex problems, they are usually invalid for nonconvex ones. In order to apply them to a broader area, a variety of generalizations of the concept of convexity have been proposed and studied. Abstract convexity is a typical one, which can be seen as a nonlinear extension of classical convexity. It covers numerous kinds of nonconvex functions, which opens a way to exploit the powerful concepts and ideas of convex analysis for nonconvex problems. Nowadays, theories of different kinds of abstract convex functions and their applications in various fields are abundant; see [1, 2]. However, due to the difficulty of coping with the concept of supremum in a vector space, the literature devoted to these aspects in the vector-valued case is scarce. As concepts and related theories about the supremum in a vector space have already been introduced and studied in a large body of literature, the extension of abstract convexity to vector- and set-valued settings may be possible and worth pursuing.
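To fix ideas, we add the prototypical definition (in the spirit of the abstract convexity theory surveyed in [1, 2]): given a set $H$ of elementary functions, $f$ is abstract convex with respect to $H$ if it coincides with the upper envelope of its $H$-minorants,

\[
f(x) = \sup \{ h(x) : h \in H,\ h \le f \}.
\]

Classical convexity is the special case in which $H$ is the set of affine functions, since every lower semicontinuous convex function is the supremum of its affine minorants; other choices of $H$ capture large classes of nonconvex functions.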
ADMM for multiaffine constrained optimization
Published in Optimization Methods and Software, 2020
Wenbo Gao, Donald Goldfarb, Frank E. Curtis
The strengthened convexity condition is stronger than convexity, but weaker than strong convexity. Examples are given by higher-order polynomials of degree $d$ composed with the Euclidean norm, which are not strongly convex for $d>2$. One such example is the function $\|x\|^{3}$, which is not strongly convex but does satisfy the strengthened convexity condition; this function appears in the context of the cubic-regularized Newton method [45]. We note that ADMM can be applied to solve the nonconvex cubic-regularized Newton subproblem by performing a splitting that separates the quadratic model from the cubic regularization term.
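As a minimal sketch of such a splitting (our construction, not the authors' implementation), consider the cubic-regularized Newton subproblem $\min_x\, g^{\top} x + \tfrac{1}{2} x^{\top} H x + \tfrac{M}{6}\|x\|^{3}$: introduce a copy $y$ of $x$, assign the quadratic model to the $x$-block and the cubic term to the $y$-block, and alternate exact minimizations of the augmented Lagrangian. The penalty parameter, iteration count, and tolerance below are illustrative assumptions.

```python
import numpy as np

def admm_cubic_subproblem(g, H, M, rho=10.0, iters=500, tol=1e-8):
    """Sketch: ADMM for  min_x  g'x + 0.5 x'Hx + (M/6)||x||^3
    via the splitting x = y, with the cubic term assigned to y.

    Augmented Lagrangian:
      g'x + 0.5 x'Hx + (M/6)||y||^3 + lam'(x - y) + (rho/2)||x - y||^2
    """
    n = g.size
    x, y, lam = np.zeros(n), np.zeros(n), np.zeros(n)
    # rho should exceed any negative curvature of H so the x-step is a minimization
    A = H + rho * np.eye(n)
    for _ in range(iters):
        # x-step: strongly convex quadratic -> one linear solve
        x = np.linalg.solve(A, rho * y - lam - g)
        # y-step: min_y (M/6)||y||^3 - lam'y + (rho/2)||y - x||^2.
        # The minimizer is radial along v = x + lam/rho; its radius s >= 0
        # solves (M/2) s^2 + rho s - rho ||v|| = 0 (take the positive root).
        v = x + lam / rho
        nv = np.linalg.norm(v)
        if nv > 0.0:
            s = (-rho + np.sqrt(rho**2 + 2.0 * M * rho * nv)) / M
            y = (s / nv) * v
        else:
            y = np.zeros(n)
        # dual ascent on the consensus constraint x = y
        lam += rho * (x - y)
        if np.linalg.norm(x - y) <= tol:
            break
    return x
```

Because $H$ may be indefinite, the subproblem is nonconvex and this sketch carries no convergence guarantee of its own; the multiaffine ADMM framework developed in the paper is what supplies conditions under which such iterations provably converge.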