Nonlinear Optimization
Published in Jeffery J. Leader, Numerical Analysis and Scientific Computation, 2022
Consider again Eq. (2.2). The method of steepest descent proceeds by selecting the search direction −∇F(xk), stepping along it until it finds a local minimum in that direction (at which point the search direction will be tangent to a contour of F), then repeating the procedure from that point. Although it may make many seemingly unnecessary turns and can stagnate in a region where ‖∇F(x)‖ is small but nonzero, it is very robust, and it typically converges to a stationary point of F (that is, a point where ∇F(x)=0). Convergence to a stationary point that is neither a minimum nor a maximum is uncommon, though the method may be very slow while near such a point.
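The iteration described above can be sketched as follows. This is a minimal illustration on an assumed two-variable quadratic F(x) = ½xᵀAx − bᵀx (not the text's Eq. (2.2)); for a quadratic, the line-search minimizer along a direction d has the closed form α = dᵀd / dᵀAd.

```python
import numpy as np

# Assumed sample problem: F(x) = 0.5 x^T A x - b^T x, with A positive definite.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

def grad_F(x):
    return A @ x - b  # gradient of the quadratic

x = np.zeros(2)
for _ in range(200):
    d = -grad_F(x)                   # search direction: negative gradient
    if np.linalg.norm(d) < 1e-10:    # stop at a stationary point
        break
    alpha = (d @ d) / (d @ (A @ d))  # exact line search along d for a quadratic
    x = x + alpha * d

# At convergence grad F(x) is (numerically) zero, i.e. x solves A x = b.
```

For a general nonlinear F the exact line search would be replaced by a one-dimensional minimization (or a backtracking rule), but the outer loop is unchanged.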
Search Methods
Published in Theodore Louis, Behan Kelly, Introduction to Optimization for Environmental and Chemical Engineers, 2018
Summarizing, the method of steepest descent is a gradient method in which the step size is chosen to achieve the maximum decrease of the objective function at each individual step. The method of steepest descent is an example of an iterative algorithm: it generates a sequence of points, each calculated on the basis of the points preceding it. It is a descent method because the value of the objective function decreases as each new point is generated by the algorithm. The steepest descent algorithm proceeds as follows: at each step, starting from the point x1, a line search is performed along the negative gradient direction until a minimizer, x2, is located. A typical sequence is provided later in Figure 4.4. Note that the method of steepest descent moves in orthogonal steps.
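The orthogonality of successive steps can be checked numerically. The sketch below uses an assumed quadratic (not the function behind Figure 4.4): with an exact line search, each new steepest-descent direction is perpendicular to the previous one.

```python
import numpy as np

# Assumed positive-definite quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.array([[4.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 1.0])
grad = lambda x: A @ x - b

x = np.array([0.0, 0.0])
dirs = []
for _ in range(4):
    d = -grad(x)                     # steepest-descent direction
    alpha = (d @ d) / (d @ (A @ d))  # exact minimizer along d
    x = x + alpha * d
    dirs.append(d)

# Dot products of consecutive search directions are (numerically) zero.
orth = [abs(dirs[k] @ dirs[k + 1]) for k in range(len(dirs) - 1)]
```

This is why the iterates trace the characteristic zigzag pattern toward the minimizer.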
Algorithmic Optimization
Published in Abul Hasan Siddiqi, Mohamed Al-Lawati, Messaoud Boulbrachene, Modern Engineering Mathematics, 2017
In spite of its local optimal descent property, the method of steepest descent often performs poorly. There is, however, a class of first-order line search descent methods, known as conjugate gradient methods, for which it can be proved that a method from this class will converge exactly in a finite number of iterations when applied to a positive-definite quadratic function of the form f(x) = ½xᵀAx − bᵀx + c, where A is a positive-definite n×n real symmetric matrix.
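The finite-termination property can be demonstrated with the linear conjugate gradient iteration. This is a generic sketch on an assumed 3×3 positive-definite system (not an example from the chapter); in exact arithmetic CG minimizes the quadratic above in at most n steps.

```python
import numpy as np

# Assumed positive-definite symmetric system for illustration.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
n = len(b)

x = np.zeros(n)
r = b - A @ x          # residual = -grad f(x)
p = r.copy()           # first search direction (steepest descent)
for _ in range(n):     # at most n iterations for an n x n system
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x = x + alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-12:
        break
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p   # new direction is A-conjugate to the previous ones
    r = r_new
```

Minimizing the quadratic is equivalent to solving Ax = b, which is what the final iterate satisfies.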
Using hotspot analysis to establish non-equidistant cooling channels automatically
Published in Journal of the Chinese Institute of Engineers, 2019
Yuan-Ping Luh, Huang-Li Wang, Hong-Wai Iao, Tsu-Chen Kuo
The following section presents studies related to algorithms for developing conformal cooling channels. Lin (2002) developed a global optimization algorithm based on simulated annealing to determine the optimal design parameters, minimizing the required energy. Li, Zhao, and Guan (2009) used a genetic algorithm to optimize the design of heating channels. In a genetic algorithm, a candidate solution is referred to as an individual, which comprises chromosomes or genotypes representing the design variables. The chromosomes are usually represented as arrays of bits or numbers. First, the algorithm randomly generates a certain number of individuals to form the initial population. Each individual in each generation is assessed, and a fitness value is obtained from a fitness function. Populations are sorted according to fitness values; individuals with higher values are listed first. Rhee et al. (2010) proposed a first-order optimization method, gradient optimization, which is often known as the method of steepest descent. To determine a local minimum of a function, an iterative search is performed, taking steps proportional to the negative of the gradient (or approximate gradient) of the function at the current point. Au, Yu, and Chiu (2011) used a visibility-based method to place point light sources in a mold cavity; the illuminated areas represented the effective regions of cooling channels. Wang et al. (2011a) applied particle swarm optimization to a heating/cooling channel design. Particle swarm optimization is a population-based optimization method that moves a population toward the optimal position based on population fitness. However, this method does not apply evolution operators to individuals; rather, it regards every individual as a particle (a point without volume) in a D-dimensional search space. Each particle flies through the search space at a speed that is dynamically adjusted according to its own flying experience and that of its peers.
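The particle swarm update just described can be sketched in a few lines. Everything here is assumed for illustration (the test function, the inertia weight w, and the acceleration coefficients c1 and c2 are conventional textbook choices, not the cited channel-design setup): each velocity blends inertia with pulls toward the particle's personal best and the swarm's global best.

```python
import random

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization sketch (assumed coefficients)."""
    random.seed(0)
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal bests
    gbest = min(pbest, key=f)[:]                 # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

# Example: minimize the sphere function, whose optimum is at the origin.
best = pso(lambda x: sum(v * v for v in x))
```

Note that, unlike a genetic algorithm, no selection, crossover, or mutation operators appear; only the velocity update moves the population.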
Optimization of the esterification process of crude jatropha oil (CJO) containing high levels of free fatty acids: a Malaysian case study
Published in Biofuels, 2020
Hazir Farouk, Seyed Mojib Zahraee, A.E. Atabani, Mohammad Nazri Mohd Jaafar, Fatah H. Alhassan
In this work the method of steepest descent is used to move sequentially in the direction of maximum decrease in the response. To do this, the first-order regression model is used to determine the path of steepest descent, a line through the center of the region:
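The path computation can be illustrated as follows. The coefficients b1 and b2 below are hypothetical, not the study's fitted values: given a first-order model y = b0 + b1·x1 + b2·x2 in coded units, each move along the path of steepest descent is proportional to −(b1, b2), with the step conventionally scaled so that x1 changes by one coded unit.

```python
# Hypothetical fitted first-order coefficients (coded units).
b1, b2 = -3.0, 1.5

# For maximum decrease in the response, move along -(b1, b2).
# Scale each step so x1 changes by one coded unit; x2 then moves
# in the proportion b2/b1.
steps = []
x1 = x2 = 0.0                # start at the center of the design region
for _ in range(5):
    x1 += -b1 / abs(b1)      # +1 coded unit here, since b1 < 0
    x2 += -b2 / abs(b1)      # proportional step, -0.5 per move here
    steps.append((x1, x2))
```

Experiments would then be run at these coded points until the response stops decreasing, which locates the region for a follow-up design.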