Parameter Estimation
Published in Alex Martynenko, Andreas Bück, Intelligent Control in Drying, 2018
In contrast, evolutionary algorithms (Bäck, 1996), with genetic algorithms and differential evolution as the most popular representatives of the metaheuristic techniques, are based on the concept of survival of the fittest: the cost function is evaluated for a population of individual parameter estimates, and the fittest individuals (i.e., the ones with the lowest values of the cost function) are the most likely to be part of the next population. Variation in the new populations is ensured by applying analogues of mutation and crossover. Stochastic optimization algorithms also include metaheuristic techniques that mimic biological and physical phenomena, for example, ant colony optimization, particle swarm optimization, and simulated annealing. Although these techniques increase the chance of finding the global optimum, they still cannot guarantee that a found solution is globally optimal. Hybrid optimization algorithms aim to combine the advantages of deterministic and stochastic algorithms.
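As a concrete illustration, here is a minimal genetic-algorithm sketch in Python (not taken from Bäck, 1996; the function names, population size, and mutation settings are our illustrative assumptions): individuals are real-valued parameter vectors, selection favors low cost, and crossover plus Gaussian mutation keep variation in the population.

```python
import random

def genetic_algorithm(cost, n_params, pop_size=50, n_gen=100,
                      mutation_rate=0.1, bounds=(-5.0, 5.0)):
    """Minimal GA sketch: evaluate the cost for a population of parameter
    vectors, select fitter individuals as parents, and create variation
    through one-point crossover and Gaussian mutation."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(n_params)]
           for _ in range(pop_size)]

    def tournament():
        # Fitter (lower-cost) individuals are more likely to reproduce.
        a, b = random.sample(pop, 2)
        return a if cost(a) < cost(b) else b

    for _ in range(n_gen):
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = random.randrange(1, n_params) if n_params > 1 else 0
            child = p1[:cut] + p2[cut:]                      # crossover
            child = [g + random.gauss(0.0, 0.1)              # mutation
                     if random.random() < mutation_rate else g
                     for g in child]
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=cost)

# Usage: estimate three parameters minimizing a simple quadratic cost.
best = genetic_algorithm(lambda p: sum(x * x for x in p), n_params=3)
```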
Online optimization algorithms
Published in Xiaobiao Huang, Beam-based Correction and Optimization for Accelerators, 2019
Stochastic optimization algorithms often have poorer efficiency than deterministic methods in terms of the number of function evaluations required to converge to the optimum. This is understandable, as these methods are not “greedy”: they do not attempt to take the shortest path toward the minimum. Because new trial solutions are chosen somewhat randomly, the outcome of a function evaluation is not always an improvement. However, there is a benefit that comes at the cost of this efficiency loss: stochastic algorithms often have a better chance of finding the global optimum. By allowing steps in bad directions and using random sampling, stochastic algorithms are not as easily trapped in local minima as deterministic algorithms are.
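A minimal simulated-annealing sketch (the function names and tuning constants are our own, not from the book) makes this non-greedy behavior concrete: worse trial points are accepted with a temperature-dependent probability, which is exactly what lets the search escape local minima.

```python
import math, random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, n_iter=5000):
    """Non-greedy stochastic search: a worse trial point is accepted with
    probability exp(-delta / T), so the walk can leave local minima."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = t0
    for _ in range(n_iter):
        trial = x + random.uniform(-step, step)   # random trial solution
        ft = f(trial)
        delta = ft - fx
        # Accept improvements always; accept deteriorations sometimes.
        if delta < 0 or random.random() < math.exp(-delta / T):
            x, fx = trial, ft
            if fx < fbest:
                best, fbest = x, fx
        T *= cooling                              # gradually become greedier
    return best, fbest

# Usage on a multimodal function with many local minima around x = 2.
best, val = simulated_annealing(lambda x: (x - 2.0) ** 2
                                + 2.0 * math.sin(5.0 * x), x0=-4.0)
```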
Introduction
Published in A Vasuki, Nature-Inspired Optimization Algorithms, 2020
Deterministic and Stochastic Optimization: Deterministic optimization algorithms are consistent; each time the algorithm is run, it produces the same result for the same input values. Stochastic optimization algorithms have some randomness associated with them and may produce different results for each run of the algorithm. Examples of deterministic optimization algorithms are hill climbing, traditional gradient-based methods, and the simplex method. Genetic algorithms and particle swarm optimization are two well-known examples of population-based stochastic optimization techniques. Stochastic optimization algorithms are heuristic or metaheuristic and have some inherent randomness that leads to an optimum or near-optimum solution in finite time.
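A small sketch (illustrative only; gradient descent and pure random search stand in for the named methods) shows the distinction: the deterministic routine returns the same answer on every run, while the stochastic one can return a different near-optimum each time.

```python
import random

def gradient_descent(grad, x0, lr=0.1, n_iter=100):
    """Deterministic: the same start point yields the same result every run."""
    x = x0
    for _ in range(n_iter):
        x -= lr * grad(x)                # follow the gradient downhill
    return x

def random_search(f, bounds, n_iter=1000):
    """Stochastic: repeated runs may return different near-optimal points."""
    lo, hi = bounds
    best = random.uniform(lo, hi)
    for _ in range(n_iter):
        trial = random.uniform(lo, hi)   # random candidate solution
        if f(trial) < f(best):
            best = trial
    return best

f = lambda x: (x - 1.0) ** 2
print(gradient_descent(lambda x: 2.0 * (x - 1.0), x0=5.0))  # identical each run
print(random_search(f, bounds=(-10.0, 10.0)))               # varies across runs
```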
The impacts of longitudinal separation, efficiency loss, and cruise speed adjustment in robust terminal traffic flow problem under uncertainty
Published in Transportation Letters, 2023
Kam K.H. Ng, Felix T.S. Chan, Yichen Qin, Gangyan Xu
As mentioned in Constraint (23) and Objective function (43) of the Robust-TTFP, the arrival time at the entry waypoint for each flight in each scenario is obtained by adding a perturbation generated from the empirical PDF in Figure 11 to the nominal arrival time. It is worth noting that the computation time for an NP-hard problem increases exponentially with a very large sample size (K. K. H. Ng et al. 2020b, 2018, 2017). The true optimal value can be obtained only as the sample size tends to infinity. Intuitively, solving such a stochastic optimization problem exactly is computationally intractable. SAA offers high-quality solutions with a statistical performance bound and can yield a solution that satisfies the computational needs of practitioners (Kleywegt, Shapiro, and Homem-de-Mello 2002). Various engineering applications, including robust liner shipping services, supply chain networks with disruption, and stochastic personnel assignment, have adopted SAA or variants of SAA to solve stochastic optimization problems (Xiaojun Chen, Shapiro, and Sun 2019; Li and Zhang 2018; Pour, Naji-Azimi, and Salari 2017; Singham 2019; Wang and Meng 2012). SAA remains a promising research area for solving stochastic optimization problems with Monte Carlo simulation (Högdahl, Bohlin, and Fröidh 2019). The pseudocode of the SAA approach is stated in Algorithm 1.
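Algorithm 1 is not reproduced here, but the following sketch conveys the standard SAA pattern on a hypothetical newsvendor-style problem (the demand distribution, costs, and function names are our illustrative assumptions, not the Robust-TTFP model): each replication solves a deterministic sample problem, the replications support a statistical bound on the optimality gap, and candidate solutions are compared on a large independent sample.

```python
import random, statistics

def solve_sample_problem(demands, price=5.0, cost=3.0):
    """Hypothetical newsvendor-style sample problem: choose the order
    quantity x minimizing the average cost over the sampled demands."""
    def avg_cost(x):
        return statistics.mean(cost * x - price * min(x, d) for d in demands)
    x_best = min(range(0, 21), key=avg_cost)   # enumerate a small decision grid
    return x_best, avg_cost(x_best)

def saa(n_scenarios=200, n_replications=10, seed=0):
    """SAA sketch: each replication draws n_scenarios Monte Carlo demand
    samples and solves the deterministic sample problem; the spread
    across replications yields a statistical performance bound."""
    rng = random.Random(seed)
    candidates, values = [], []
    for _ in range(n_replications):
        demands = [rng.gauss(10.0, 2.0) for _ in range(n_scenarios)]
        x, v = solve_sample_problem(demands)
        candidates.append(x)
        values.append(v)
    # The mean sample-problem optimum estimates a lower bound (for
    # minimization); candidates are compared on a large fresh sample.
    test = [rng.gauss(10.0, 2.0) for _ in range(10000)]
    def estimate(x):
        return statistics.mean(3.0 * x - 5.0 * min(x, d) for d in test)
    best = min(set(candidates), key=estimate)
    return best, statistics.mean(values), estimate(best)

best_x, lower, upper = saa()
```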
Robust binary linear programming under implementation uncertainty
Published in Engineering Optimization, 2022
Jose Ramirez-Calderon, V. Jorge Leon, Barry Lawrence
Different approaches aim to protect the optimality and feasibility of solutions in the face of uncertainty, including stochastic optimization (e.g. Dantzig 1955; Beale 1955; Wets 1966, 1974, 1983) and robust optimization (e.g. Soyster 1973; Mulvey, Vanderbei, and Zenios 1995; Bertsimas and Sim 2004). Stochastic optimization seeks solutions that remain optimal and feasible with high probability. However, there may exist realizations of the uncertainty for which optimality or feasibility is not satisfied (see Ben-Tal, Ghaoui, and Nemirovski 2009). Robust optimization approaches, on the other hand, seek solutions that satisfy the given levels of optimality and feasibility for any realization of the uncertainty; such solutions are termed robust solutions (Mulvey, Vanderbei, and Zenios 1995). For instance, Soyster (1973) considers perturbations in the coefficients of the constraints using convex sets; the resulting model produces solutions that are feasible for any realization of the data within those sets.
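As a sketch of the idea (our notation, with interval uncertainty sets rather than general convex sets), Soyster-style protection replaces each uncertain constraint with its worst case:

```latex
% Nominal constraint: \sum_j a_{ij} x_j \le b_i, where each coefficient is
% only known to lie in an interval around a nominal value \bar{a}_{ij}:
%   a_{ij} \in [\bar{a}_{ij} - \hat{a}_{ij},\; \bar{a}_{ij} + \hat{a}_{ij}].
% For x \ge 0, the worst-case (robust) counterpart of the constraint is
\sum_{j} \bar{a}_{ij}\, x_j \;+\; \sum_{j} \hat{a}_{ij}\, x_j \;\le\; b_i
\qquad \forall i,
% so any x satisfying it is feasible for every realization of the data
% within the intervals.
```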
Three-Periods Optimization Algorithm: A New Method for Solving Various Optimization Problems
Published in IETE Journal of Research, 2022
Mohammad Dehghani, Pavel Trojovský, Štěpán Hubálovský, Theyab R. Alsenani, Jaswinder Singh
Stochastic optimization methods can provide appropriate solutions to optimization problems without using derivative or gradient information of the objective function, relying solely on random operators. Optimization algorithms of this kind, which are among the most widely used and effective methods in this group, optimize the objective function in an iterative process based on random scanning of the problem's search space. The process of reaching a solution is as follows: first, a certain number of candidate solutions to the problem are proposed. These candidates are then improved in an iterative process following the steps of the algorithm. The criterion for the goodness of a solution is the value of the objective function. The improvement process continues until the algorithm's stopping condition is reached. Finally, after the algorithm has run to completion on the optimization problem, the best solution it has found is returned [6].
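The generic loop described in this paragraph can be sketched as follows (a minimal illustration with our own names and a fixed iteration budget as the stopping condition, not the Three-Periods algorithm itself):

```python
import random

def stochastic_optimizer(objective, n_candidates=30, n_iter=200,
                         bounds=(-10.0, 10.0), step=0.5):
    """Generic loop from the text: propose candidate solutions, improve
    them with random operators, judge goodness by the objective value,
    and stop when the iteration budget (the stop condition) is spent.
    No derivative or gradient information is used."""
    lo, hi = bounds
    candidates = [random.uniform(lo, hi) for _ in range(n_candidates)]
    for _ in range(n_iter):
        for i, x in enumerate(candidates):
            trial = x + random.uniform(-step, step)   # random operator
            trial = min(max(trial, lo), hi)           # stay in the search space
            if objective(trial) < objective(x):       # keep improvements
                candidates[i] = trial
    return min(candidates, key=objective)             # best proposed solution

# Usage: minimize a non-smooth 1-D function without derivatives.
best = stochastic_optimizer(lambda x: abs(x - 3.0) + 0.1 * x * x)
```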