Classical Optimization Methods
Published in A Vasuki, Nature-Inspired Optimization Algorithms, 2020
These functions may be of different types: continuous or discrete, differentiable or non-differentiable, deterministic or non-deterministic (with random parameters built into them). Traditional algorithms are usually applicable only if the function is continuous and differentiable. When the function is subject to constraints, the problem is a constrained programming problem; when it is not, it is an unconstrained programming problem. If there is only a single objective to be optimized, it is single-objective optimization, whereas if there are multiple objectives, it is multi-objective optimization. The function may be concave or convex, as determined by the contours of its surface plot: a concave function has one maximum, whereas a convex function has one minimum. A problem is said to be feasible if there exists at least one set of design variables that satisfies the objective function and the constraints; it is infeasible if no such set of design variables can be found, in which case the solution set is empty. For an unbounded problem there is no optimal solution, since it is always possible to find a solution better than the existing one.
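As a concrete illustration of these categories, the sketch below sets up a small continuous, differentiable, convex, constrained, single-objective problem and solves it with SciPy; the particular objective, constraint, and starting point are invented here purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# A convex, continuous, differentiable objective: exactly one minimum exists.
def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

# One inequality constraint, x0 + x1 >= 1, makes this a constrained problem.
# The feasible set is nonempty, so the problem is feasible (not unbounded).
constraints = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}]

result = minimize(objective, x0=np.zeros(2), constraints=constraints)
print(result.x, result.fun)  # approximately [2. -1.] with objective value 2.0
```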
Functions of Vectors
Published in Hannah Robbins, Functional Linear Algebra, 2021
which we can write as the equations $ax_1 + bx_2 = b_1$ and $cx_1 + dx_2 = b_2$. For familiarity's sake, let's use $x$ for $x_1$ and $y$ for $x_2$, so our equations are $ax + by = b_1$ and $cx + dy = b_2$. Geometrically, this means we're looking at two lines in the plane. A solution to $A\vec{x} = \vec{b}$ is a pair of values for $x$ and $y$ which satisfies both of these equations, which geometrically means a point $(x, y)$ in the plane that lies on both lines. This means our solution set is the set of intersection points of our two lines.
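To make the geometry concrete, the short sketch below picks arbitrary coefficients and right-hand sides (all invented for illustration) and finds the intersection point of the two lines with NumPy.

```python
import numpy as np

# Coefficients of the two lines ax + by = b1 and cx + dy = b2
# (values chosen arbitrarily for illustration).
A = np.array([[2.0, 1.0],    # a, b
              [1.0, -1.0]])  # c, d
rhs = np.array([5.0, 1.0])   # b1, b2

# If the lines are not parallel (det(A) != 0), they meet in exactly one point,
# and the solution set consists of that single intersection point.
if not np.isclose(np.linalg.det(A), 0.0):
    x, y = np.linalg.solve(A, rhs)
    print(f"intersection point: ({x}, {y})")  # (2.0, 1.0)
else:
    print("parallel lines: the solution set is empty or a whole line")
```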
Matrices
Published in Lina Oliveira, Linear Algebra, 2022
Solving a system of linear equations consists in determining the set of all $n$-tuples $(x_1, x_2, \dots, x_n)$ of $n$ scalars which satisfy all the equations in the system. This set is called the solution set, or the general solution, of the system.
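As a hypothetical example, SymPy's linsolve returns this general solution directly, including its parametric form when the system is underdetermined; the particular equations below are made up for illustration.

```python
from sympy import symbols, linsolve

x1, x2, x3 = symbols("x1 x2 x3")

# An underdetermined system: two equations in three unknowns, so the
# solution set is an infinite family described by one free parameter.
system = [x1 + x2 + x3 - 6,  # x1 + x2 + x3 = 6
          x1 - x2 - 2]       # x1 - x2 = 2
solution_set = linsolve(system, x1, x2, x3)
print(solution_set)  # {(4 - x3/2, 2 - x3/2, x3)}
```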
Incremental proximal gradient scheme with penalization for constrained composite convex optimization problems
Published in Optimization, 2021
The traditional Heron problem is to find a point on a straight line in the plane that minimizes the sum of its distances to two given points. Several generalizations of the classical Heron problem, namely finding a point that minimizes the sum of the distances to given closed convex sets over a nonempty simple closed convex set, have been investigated by many authors, for instance, Boţ et al. [31] and Mordukhovich et al. [32,33]. However, the generalized Heron problem is very challenging when the constraint set is an affine subspace (the solution set of a system of linear equations), which typically has no solution, so that computing a metric projection onto this feasible set is impossible; moreover, none of the methods mentioned in the above references can be applied in that case. This motivates us to consider the generalized Heron problem of finding a point that minimizes the sum of the distances to given closed convex sets over the set of least squares solutions of a system of linear equations, that is,

$$\min\left\{ \sum_{i=1}^{m} d(x, C_i) \;:\; x \in \operatorname*{argmin}_{y} \tfrac{1}{2}\,\|Ay - b\|_2^2 \right\}, \tag{12}$$

where $C_i$ is a nonempty closed convex subset for all $i = 1, \dots, m$, $A$ is a matrix, and $b$ is a vector. We observe that (12) fits into the setting of problem (1) under a suitable identification of the component functions. In this case, the solution set of the system of linear equations can be empty. Moreover, it is worth noting that when performing our proposed method (Algorithm 1), we only need to compute the gradient of $g$, not the inverse of any matrix. Note also that the square of the $\ell_2$-norm serves to guarantee the uniqueness of a solution to the problem.
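A minimal sketch of the kind of iteration involved (not the authors' Algorithm 1, whose exact update rules and parameter schedules are given in the paper): it alternates a gradient step on a penalized least squares term $g(x) = \tfrac{1}{2}\|Ax - b\|_2^2$ with incremental proximal steps on the distance functions $d(\cdot, C_i)$, using the closed-form prox of a distance function. The sets (two Euclidean balls), step sizes, and penalty schedule are all invented for illustration.

```python
import numpy as np

def project_ball(x, center, radius):
    """Euclidean projection onto the ball B(center, radius)."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def prox_dist(x, center, radius, t):
    """Prox of t * d(., C) for a ball C: step toward the projection,
    but never past it (closed form for distance-to-convex-set functions)."""
    p = project_ball(x, center, radius)
    gap = np.linalg.norm(x - p)
    return p if gap <= t else x + (t / gap) * (p - x)

# Illustrative data: two balls and an inconsistent linear system,
# so Ax = b has no solution but least squares solutions exist.
balls = [(np.array([3.0, 0.0]), 1.0), (np.array([0.0, 3.0]), 1.0)]
A = np.array([[1.0, 1.0], [1.0, 1.0]])  # rank-deficient on purpose
b = np.array([2.0, 4.0])

x = np.zeros(2)
for k in range(1, 3001):
    t = 1.0 / k      # diminishing step size for the prox steps
    beta = 0.1 * k   # growing penalty parameter (t * beta stays bounded)
    # Gradient step on beta * g(x): only A^T(Ax - b) is needed, no inverse.
    x = x - t * beta * (A.T @ (A @ x - b))
    # Incremental proximal steps, one distance function d(., C_i) at a time.
    for center, radius in balls:
        x = prox_dist(x, center, radius, t)

print(x)  # lands on (or near) the least squares line x1 + x2 = 3
```

The design point worth noticing is the one the excerpt emphasizes: the infeasible constraint $Ax = b$ is handled through the gradient of the penalty $g$, so no projection onto (and no solve of) the linear system is ever required.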