Nonlinear Optimization
Published in Jeffery J. Leader, Numerical Analysis and Scientific Computation, 2022
There is more than one way to proceed. Some methods sample the function at a number of points and use the values of F at these points to construct a simpler and smoother model of the function. Often the algorithm uses a convex function for its model. But the most common direct search methods either use some strategy for selecting new search directions and then explore in those directions, or else maintain a list of points defining a region which (it is hoped) contains the minimum. We give examples of the former type of algorithm in this section and the latter type in the next section. Theoretical developments of these types of algorithms generally assume that the function is unimodal in the region of interest, but this is not a requirement for their application in practice.
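As a minimal illustration of a direction-based direct search (not the book's specific algorithm), the following Python sketch probes each coordinate direction in both senses, accepts any improving point, and halves the step when no direction improves; the objective, starting point, and tolerances are illustrative assumptions.

import numpy as np

def compass_search(F, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Simple compass (coordinate) direct search: probe +/- each axis,
    move to any improving point, otherwise shrink the step."""
    x = np.asarray(x0, dtype=float)
    fx = F(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                ft = F(trial)
                if ft < fx:
                    x, fx = trial, ft
                    improved = True
        if not improved:
            step *= 0.5          # no improving direction: refine the mesh
            if step < tol:
                break
    return x, fx

# Example: minimize a smooth two-dimensional test function
xmin, fmin = compass_search(lambda x: (x[0] - 1)**2 + 2*(x[1] + 0.5)**2, [0.0, 0.0])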
Advances in Inverse Planning Algorithm and Strategy
Published in Siyong Kim, John Wong, Advanced and Emerging Technologies in Radiation Oncology Physics, 2018
Masoud Zarepisheh, Baris Ungun, Ruijiang Li, Yinyu Ye, Stephen Boyd, Lei Xing
For computational efficiency, a convex function is usually used as the objective function in the optimization. If the objective function φ1(x) is defined to be the square of the l2-norm of the difference between the delivered dose and the target dose, the treatment planning problem can be expressed as:

minimize φ1(x) = ∑_k w_k (D_k x − p_k)²   subject to   x ≥ 0
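A small Python sketch of this weighted nonnegative least-squares formulation, assuming toy stand-ins for the dose-influence matrix D, prescription p, and voxel weights w; scaling each row by the square root of its weight reduces the problem to standard NNLS, solved here with scipy.optimize.nnls.

import numpy as np
from scipy.optimize import nnls

# Toy stand-ins for the dose-influence matrix D, prescription p, and weights w
rng = np.random.default_rng(0)
D = rng.random((50, 10))      # 50 voxels, 10 beamlet intensities
p = rng.random(50)            # prescribed dose per voxel
w = rng.random(50) + 0.1      # positive voxel weights

# minimize sum_k w_k (D_k x - p_k)^2  s.t. x >= 0, as NNLS on sqrt(w)-scaled rows
A = np.sqrt(w)[:, None] * D
b = np.sqrt(w) * p
x, residual = nnls(A, b)      # x >= 0 by construction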
Random Variables, Distributions & Linear Regression
Published in Nailong Zhang, A Tour of Data Science, 2020
We see that the second-order derivative (the Hessian) is positive semidefinite, which implies that the SSR in OLS is a convex function (see Section 3.1.4 in [3]), and for an unconstrained convex optimization problem the necessary as well as sufficient condition for optimality is that the first-order derivative equals 0 (see Section 4.2.3 in [3]). Optimization of convex functions is very important in machine learning. In fact, the parameter estimation of many machine learning models is a convex optimization problem.
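A brief numpy sketch of this first-order optimality condition for OLS, using simulated data as a stand-in: setting the gradient of the SSR to zero yields the normal equations X^T X β = X^T y.

import numpy as np

# Toy data; X and y are illustrative stand-ins
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.random((100, 2))])   # intercept + 2 features
beta_true = np.array([0.5, 2.0, -1.0])
y = X @ beta_true + 0.1 * rng.standard_normal(100)

# SSR(beta) = ||y - X beta||^2 is convex; its Hessian 2 X^T X is positive semidefinite.
# Setting the gradient -2 X^T (y - X beta) to zero gives the normal equations.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)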
Travel time analysis and dimension optimisation design of double-ended compact storage system
Published in International Journal of Production Research, 2023
Qing Yan, Jiansha Lu, Hongtao Tang, Yan Zhan, Xuemei Zhang, Yingde Li
Since V is a constant, the problem can be transformed into the following equivalent optimisation problem (EOP). The above optimisation problem is usually regarded as a convex programming problem. To prove that the objective function is convex, we first write it out explicitly and then examine its Hessian matrix: if the Hessian matrix is positive definite, the objective function is convex. The Hessian matrix of the objective function can be expressed as:
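As a generic sketch of the convexity test described above (the example matrix below is an assumed stand-in, not the paper's explicit Hessian), the following Python snippet checks positive definiteness of a symmetric Hessian via its eigenvalues.

import numpy as np

def is_positive_definite(H, tol=1e-10):
    """Check positive definiteness of a (symmetric) Hessian via its eigenvalues."""
    H = np.asarray(H, dtype=float)
    H = 0.5 * (H + H.T)                  # symmetrize against round-off
    return bool(np.all(np.linalg.eigvalsh(H) > tol))

# Illustrative 2x2 Hessian; positive definite => the function is convex there
H_example = np.array([[2.0, 0.3],
                      [0.3, 1.5]])
print(is_positive_definite(H_example))   # True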
Sparse convex optimization toolkit: a mixed-integer framework
Published in Optimization Methods and Software, 2023
Alireza Olama, Eduardo Camponogara, Jan Kronqvist
For general strongly convex functions, the strong convexity parameter is not obtained easily. However, in some practical problems found in statistical learning and control, its computation is feasible. For example, the objective function in sparse Model Predictive Control (s-MPC) problems (which is a subclass of the SCO problem) is typically a convex quadratic function. For convex quadratic functions, the strong convexity parameter is the smallest eigenvalue of the Hessian matrix. In machine learning problems the objective function usually consists of a convex function and a strongly convex regularization term. In this case, the parameter can be computed from the regularization term.
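A minimal numpy sketch of the two cases above, using an assumed toy quadratic: for a convex quadratic with Hessian Q, the strong convexity parameter is the smallest eigenvalue of Q, and for a composite objective with an l2 regularizer (λ/2)‖x‖² it can be taken from the regularizer alone.

import numpy as np

# Convex quadratic f(x) = 0.5 x^T Q x + c^T x with a toy symmetric positive definite Q
rng = np.random.default_rng(2)
M = rng.random((5, 5))
Q = M.T @ M + 0.5 * np.eye(5)

# Strong convexity parameter of the quadratic = smallest eigenvalue of its Hessian Q
mu_quadratic = np.linalg.eigvalsh(Q).min()

# Composite objective g(x) + (lam/2)*||x||^2 with g convex:
# the strong convexity parameter can be taken from the regularizer alone
lam = 0.1
mu_composite = lam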
Descent methods with computational errors in Banach spaces
Published in Optimization, 2020
Simeon Reich, Alexander J. Zaslavski
The study of minimization methods for convex functions is a central topic in optimization theory. See, for example, [4–15] and the references mentioned therein. Note, in particular, that the counterexample studied in Section 2.2 of Chapter VIII of [16] shows that, even for two-dimensional problems, the simplest choice of a descent direction, namely the normalized steepest descent direction, may produce sequences whose functional values fail to converge to the infimum of the objective function f considered in [16], even though f attains its minimum. This vector field V, defined above and discussed in Section 2.2 of Chapter VIII of [16], belongs to our space of vector fields. The steepest descent scheme (Algorithm 1.1.7) presented in Section 1.1 of Chapter VIII of [16] corresponds to the iterative process we consider below.
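For concreteness, here is a generic normalized steepest descent iteration in Python (a finite-dimensional sketch, not the Banach-space scheme or the counterexample of [16]); the step size, stopping rule, and test gradient are illustrative assumptions.

import numpy as np

def normalized_steepest_descent(grad, x0, step=0.1, n_iter=200):
    """Generic normalized steepest descent: move along -grad/||grad||."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        g = grad(x)
        norm = np.linalg.norm(g)
        if norm < 1e-12:
            break
        x = x - step * g / norm
    return x

# Example on a smooth convex quadratic (illustrative only)
x_star = normalized_steepest_descent(lambda x: np.array([2*x[0], 8*x[1]]), [3.0, 1.0])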