Engineering Optimization
Published in Ganesh M. Kakandikar, Vilas M. Nandedkar, Sheet Metal Forming Optimization, 2017
Ganesh M. Kakandikar, Vilas M. Nandedkar
Various instances of these techniques have been proposed in the literature, such as the conjugate gradient method and sequential quadratic programming (SQP). Though these algorithms were initially restricted to continuous unconstrained problems, they have been successfully extended to other fields of optimization.

Constrained optimization: Lagrange multipliers allow the user to take constraints (equalities and inequalities) into account.

Discrete optimization: In this approach, the problem is solved in a dual space to deal with discrete variables. Some interesting applications in structural optimization (sizing of thin-walled structures, geometrical configuration of trusses, topological optimization of membranes or 3D structures, etc.) can be solved.

Unfortunately, as mentioned earlier, all algorithms based on classical gradient techniques require the functions to be differentiable, which is often not the case in design optimization problems. Furthermore, even if the functions were differentiable, the risk of being trapped in a local minimum is high. Therefore, global methods seem better suited to solving general design optimization problems. The main global approaches are summarized in the next section.
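As a small illustration of the SQP approach mentioned above, the sketch below uses SciPy's SLSQP solver (an SQP variant) on a toy quadratic objective with one equality and one inequality constraint. The objective, constraints, and starting point are illustrative assumptions, not taken from the excerpted chapter.

```python
# Minimal sketch: SQP-style constrained minimization with SciPy's SLSQP solver.
# The objective and constraints below are illustrative only.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Toy quadratic objective f(x, y) = (x - 1)^2 + (y - 2.5)^2
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

constraints = [
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3.0},   # x + y = 3
    {"type": "ineq", "fun": lambda x: x[0] - 0.5},          # x >= 0.5
]

result = minimize(objective, x0=np.array([0.0, 0.0]),
                  method="SLSQP", constraints=constraints)
print(result.x, result.fun)
```

SLSQP handles both equality and inequality constraints via Lagrange-multiplier conditions, which is why it is a convenient stand-in for the classical constrained gradient methods described in the passage.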
Introduction
Published in B K Bala, Energy Systems Modeling and Policy Analysis, 2022
Linear programming is the most powerful method of constrained optimization. The standard form of a linear programming model consists of two parts: an objective function and constraints. The objective function is an equation that defines the quantity to be optimized. For linear programming, this quantity must be a one-dimensional scalar quantity such as $Z = \sum_{i} C_{i} X_{i}$.
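To make the standard form concrete, the sketch below maximizes an objective of the form $Z = \sum_{i} C_{i} X_{i}$ subject to linear inequality constraints using scipy.optimize.linprog. The coefficients and constraints are invented for illustration and do not come from the excerpted book; since linprog minimizes, the objective coefficients are negated.

```python
# Minimal sketch: maximizing Z = sum(C_i * X_i) with scipy.optimize.linprog.
# The coefficients and constraints below are illustrative assumptions.
from scipy.optimize import linprog

C = [3.0, 5.0]                 # objective coefficients C_i (illustrative)
A_ub = [[1.0, 0.0],            # X_1 <= 4
        [0.0, 2.0],            # 2 X_2 <= 12
        [3.0, 2.0]]            # 3 X_1 + 2 X_2 <= 18
b_ub = [4.0, 12.0, 18.0]

res = linprog(c=[-ci for ci in C], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print("X =", res.x, " max Z =", -res.fun)
```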
Simulation Tools
Published in Harold Klee, Randal Allen, Simulation of Dynamic Systems with MATLAB® and Simulink®, 2018
Optimum seeking methods search for extreme values (minima and maxima) of an objective function using the gradient vector in some way (Converse 1970; Miller 1975, 2000; Hasdorff 1976; Daniels 1978; Bryson 1999). The references include both constrained and unconstrained optimization problems. In constrained optimization, a subset of the parameters is constrained in some fashion, limiting the region of feasible solutions within which the optimum is sought. Typically, the constraints are inequalities reflecting limited system resources or physical boundaries required for safe operation.
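A compact way to illustrate a gradient-based optimum-seeking step under an inequality (bound) constraint is projected gradient descent, sketched below. The objective, bounds, step size, and iteration count are illustrative assumptions, not drawn from the cited references.

```python
# Minimal sketch: gradient-based minimization with a bound (box) constraint,
# handled by projecting each iterate back into the feasible region.
# Objective, bounds, and step size are illustrative assumptions.
import numpy as np

def f(x):
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2          # toy objective

def grad_f(x):
    return np.array([2.0 * (x[0] - 3.0), 2.0 * (x[1] + 1.0)])

lower, upper = np.array([0.0, 0.0]), np.array([2.0, 2.0])  # feasible box
x = np.array([1.0, 1.0])
step = 0.1

for _ in range(200):
    x = x - step * grad_f(x)          # gradient step toward lower f
    x = np.clip(x, lower, upper)      # project back onto the constraints

print("constrained minimizer approx.:", x, " f(x) =", f(x))
```

Here the unconstrained minimum lies outside the feasible box, so the iterates settle on the boundary, which is the typical situation when resource or safety constraints are active.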
Optimization of complex part-machining services based on feature decomposition in cloud manufacturing
Published in International Journal of Computer Integrated Manufacturing, 2020
Liang Guo, Yang Xu, Wei He, Yunxi Cheng
Through the above method, the multi-objective problem is converted into a single-objective problem: one objective becomes the optimization target and the other becomes a constraint. Traditional methods for solving constrained optimization problems include the simplex method, the steepest descent method, Newton's method, the conjugate gradient method, the pattern search method, the penalty function method, and the Lagrange multiplier method. Their basic idea is to convert dynamic problems into static ones and multi-objective problems into single-objective ones. However, these methods have a limited scope of application. Most of them require gradient information, so the objective function or constraints must be continuously differentiable, and even then they generally yield only a locally optimal solution.
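To make the penalty function method named above concrete, the sketch below solves a toy equality-constrained problem by adding a quadratic penalty term and increasing the penalty weight across a few unconstrained solves. The objective, constraint, and penalty schedule are illustrative assumptions, not taken from the excerpted article.

```python
# Minimal sketch of the penalty function method: the constrained problem
#   minimize f(x)  subject to  g(x) = 0
# is replaced by the unconstrained problem f(x) + mu * g(x)^2 with growing mu.
# The objective, constraint, and penalty schedule are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return x[0] ** 2 + 2.0 * x[1] ** 2          # toy objective

def g(x):
    return x[0] + x[1] - 1.0                    # equality constraint g(x) = 0

x = np.array([0.0, 0.0])
for mu in [1.0, 10.0, 100.0, 1000.0]:
    penalized = lambda x, mu=mu: f(x) + mu * g(x) ** 2
    x = minimize(penalized, x, method="BFGS").x  # unconstrained gradient-based solve

print("approximate constrained minimizer:", x, " g(x) =", g(x))
```

Note that each inner solve is an ordinary gradient-based minimization, which is precisely why the overall scheme inherits the differentiability requirement mentioned in the passage.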
Strong convergence theorems for solving pseudo-monotone variational inequality problems and applications
Published in Optimization, 2022
Constrained optimization has many applications in scientific and engineering areas, such as signal and image processing, manufacturing, optimal control, and pattern recognition. Since the concept of fuzzy sets was introduced by Zadeh [22], many mathematical programming topics have been extended to the fuzzy setting, and solving fuzzy optimization problems is now an active area of investigation. Over the past decades, a wide variety of practically and theoretically relevant numerical algorithms have been developed for solving fuzzy optimization problems; see [23,24]. In this paper, we apply our algorithms to solve fuzzy programming problems. The point of convergence corresponds to an approximate or exact optimal solution of the fuzzy programming problem.
Comparison of TVcDM and DDcTV algorithms in image reconstruction
Published in Inverse Problems in Science and Engineering, 2020
Zhiwei Qiao, Gage Redler, Shaojie Tang, Zhiguo Gui
Though we cannot mathematically prove that TVcDM should have a faster convergence rate than DDcTV, we can offer a qualitative explanation. A constrained optimization problem amounts to selecting, from the convex set defined by the constraint term, the solution with the minimal objective function value. The constraint term of DDcTV is the data fidelity term, whereas that of TVcDM is the TV regularization term. The TVcDM model therefore assumes that we know, or can guess, the TV value of the reconstructed image. It is reasonable to expect that the reconstructed image can be found quickly when its TV value is known in advance, which is why TVcDM is much faster than DDcTV.
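To contrast the two constrained formulations described above, the display below writes them in a generic schematic form. The symbols (system matrix $A$, measured data $b$, data divergence $D$, TV seminorm, and bounds $t_0$ and $\varepsilon$) are notational assumptions introduced here for illustration, not taken verbatim from the paper.

```latex
% Schematic constrained formulations (notation assumed for illustration)
% TVcDM: minimize the data divergence subject to a TV constraint
\min_{x}\; D(Ax, b) \quad \text{s.t.} \quad \|x\|_{\mathrm{TV}} \le t_0
% DDcTV: minimize the TV term subject to a data-divergence constraint
\min_{x}\; \|x\|_{\mathrm{TV}} \quad \text{s.t.} \quad D(Ax, b) \le \varepsilon
```

In this reading, TVcDM searches inside the TV ball of radius $t_0$, so a good estimate of the image's TV value directly shrinks the feasible set and plausibly speeds up convergence.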