Small Worlds and Large Worlds
Published in Richard McElreath, Statistical Rethinking, 2020
We’ll use quadratic approximation for much of the first half of this book. For many of the most common procedures in applied statistics—linear regression, for example—the approximation works very well. Often, it is even exactly correct, not actually an approximation at all. Computationally, quadratic approximation is very inexpensive, at least compared to grid approximation and MCMC (discussed next). The procedure, which R will happily conduct at your command, contains two steps.
1. Find the posterior mode. This is usually accomplished by some optimization algorithm, a procedure that virtually “climbs” the posterior distribution, as if it were a mountain. The golem doesn’t know where the peak is, but it does know the slope under its feet. There are many well-developed optimization procedures, most of them more clever than simple hill climbing. But all of them try to find peaks.
2. Once you find the peak of the posterior, you must estimate the curvature near the peak. This curvature is sufficient to compute a quadratic approximation of the entire posterior distribution. In some cases, these calculations can be done analytically, but usually your computer uses some numerical technique instead.
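The two steps map directly onto generic numerical tools. The Python sketch below (the book itself works in R) applies them to an assumed toy posterior, a binomial likelihood with a flat prior after 6 successes in 9 trials, finding the mode with a standard optimizer and estimating the curvature with a finite difference; the data and prior are illustrative, not taken from the excerpt.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

# Hypothetical example: posterior for a proportion p after observing
# 6 successes in 9 trials with a flat prior (illustrative only).
def neg_log_posterior(p, k=6, n=9):
    return -binom.logpmf(k, n, p)  # flat prior adds only a constant

# Step 1: find the posterior mode by optimisation ("climbing" the posterior).
fit = minimize_scalar(neg_log_posterior, bounds=(1e-6, 1 - 1e-6), method="bounded")
mode = fit.x

# Step 2: estimate the curvature at the mode with a finite difference,
# then use it as the precision of a Gaussian (quadratic) approximation.
h = 1e-5
curvature = (neg_log_posterior(mode + h) - 2 * neg_log_posterior(mode)
             + neg_log_posterior(mode - h)) / h**2
sd = 1.0 / np.sqrt(curvature)

print(f"quadratic approximation: Normal(mean={mode:.3f}, sd={sd:.3f})")
```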
Collective Intelligence in Networking
Published in Phan Cong Vinh, Nature-Inspired Networking: Theory and Applications, 2018
In computer science, hill climbing is a mathematical optimization technique that belongs to the family of local search. It is an iterative algorithm that starts with an arbitrary solution to a problem, then attempts to find a better solution by incrementally changing a single element of the solution. If the change produces a better solution, another incremental change is made to the new solution, and this repeats until no further improvements can be found. For example, hill climbing can be applied to the travelling salesman problem. It is easy to find an initial solution that visits all the cities, but it will be very poor compared to the optimal solution. The algorithm starts with such a solution and makes small improvements to it, such as switching the order in which two cities are visited. Eventually, a much shorter route is likely to be obtained [64].
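A minimal Python sketch of this procedure on a made-up set of city coordinates, using the two-city swap move mentioned above (all names and parameters here are illustrative):

```python
import math
import random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(30)]  # made-up instance

def tour_length(order):
    # Total length of the closed tour visiting the cities in the given order.
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def hill_climb(order):
    improved = True
    while improved:                          # stop when no swap improves the tour
        improved = False
        for i in range(len(order) - 1):
            for j in range(i + 1, len(order)):
                candidate = order[:]
                candidate[i], candidate[j] = candidate[j], candidate[i]  # swap two cities
                if tour_length(candidate) < tour_length(order):
                    order, improved = candidate, True
    return order

start = list(range(len(cities)))
best = hill_climb(start)
print(tour_length(start), "->", tour_length(best))
```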
Miscellaneous Algorithms
Published in Nazmul Siddique, Hojjat Adeli, Nature-Inspired Computing, 2017
Local search is carried out on the current best solution to exploit the search space and to find a local optimum. Hill climbing is a good local search method for finding a local optimum (a solution that cannot be improved by considering a neighboring configuration) but cannot guarantee finding the best possible solution (the global optimum) out of all possible solutions in the search space. The local search used in GbSA is a modified hill-climbing algorithm, which ensures exploitation of the search space (Aarts and Lenstra, 1997). In the modified hill-climbing method, two solutions are created in the neighborhood of the current best solution provided by global search methods such as the spiral chaotic move. If neither is better than the current best solution x, then the current best solution is returned. Otherwise, a local search exploits the space around the current best solution with small step sizes, gradually increasing the step sizes for faster exploration, and returns the locally best solution. The local search method consists of the following steps:
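One way to sketch the behaviour just described, assuming a one-dimensional real-valued solution and a hypothetical objective f to be maximised (neither of which is specified in the excerpt), is:

```python
def modified_hill_climb(f, x_best, step=0.01, grow=1.5, max_iter=50):
    """Rough sketch of the two-neighbour check and expanding-step local search
    described above; not the GbSA authors' code."""
    # Create two solutions in the neighbourhood of the current best.
    left, right = x_best - step, x_best + step
    if f(left) <= f(x_best) and f(right) <= f(x_best):
        return x_best                      # neither neighbour is better: keep x
    # Otherwise exploit around x_best, gradually increasing the step size.
    for _ in range(max_iter):
        candidates = [x_best - step, x_best + step]
        better = max(candidates, key=f)
        if f(better) > f(x_best):
            x_best = better
            step *= grow                   # larger steps for faster exploration
        else:
            break
    return x_best

# Example usage with a toy objective (assumed, for illustration only).
print(modified_hill_climb(lambda x: -(x - 2.0) ** 2, x_best=0.0))
```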
Mode choice in strategic freight transportation models: a constrained Box–Cox meta-heuristic for multivariate utility functions
Published in Transportmetrica A: Transport Science, 2022
Generally, once all the solutions around an initial solution have been explored, the strategy used in hill-climbing algorithms is to pursue the exploration towards the solution that presents the highest improvement (steepest climb). Such an approach implicitly assumes that there exists a continuum between all the valid solutions. However, it has already been pointed out that this is not always the case for the problem discussed in this paper. Therefore, the algorithm tests all the solutions in each dimension and further explores all the solutions with the expected signs that have a higher log-likelihood than the current solution. As multiple search paths are explored, the algorithm must ‘remember’ the solutions to (re)start from. Consequently, the algorithm belongs to the family of hill-climbing heuristics with backtracking capabilities (Witten, Frank, and Hall 2011).
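The backtracking idea can be illustrated with a generic Python sketch (this is not the paper's estimation code): improving neighbours found in each dimension are remembered and later used as restart points. The grid, moves, and "log-likelihood" below are all assumed for illustration.

```python
def hill_climb_with_backtracking(f, start, neighbours):
    """Explore every neighbour of the current solution, keep all improving
    ones, and backtrack to them later; return the best solution found."""
    best, best_value = start, f(start)
    to_explore = [start]                       # solutions to (re)start from
    seen = {start}
    while to_explore:
        current = to_explore.pop()             # backtrack to a remembered solution
        for cand in neighbours(current):
            if cand in seen:
                continue
            seen.add(cand)
            if f(cand) > f(current):           # improving direction: remember it
                to_explore.append(cand)
                if f(cand) > best_value:
                    best, best_value = cand, f(cand)
    return best

# Toy usage on a 2-D integer grid, maximising an assumed "log-likelihood".
loglik = lambda x: -((x[0] - 3) ** 2 + (x[1] + 1) ** 2)
moves = lambda x: [(x[0] + dx, x[1] + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
print(hill_climb_with_backtracking(loglik, (0, 0), moves))
```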
Toward an Agent-Based Blackboard System for Reactor Design Optimization
Published in Nuclear Technology, 2022
Ryan Stewart, Todd S. Palmer, Samuel Bays
For local searches, the steepest-ascent hill-climbing algorithm, neighborhood-search algorithm, and a modified NSGA-II are used. The hill-climbing algorithm selects an approximate optimal point and then makes incremental steps toward design variables that further optimize the objective functions (Ref. 33). The neighborhood-search algorithm selects an approximate optimal point, applies a small step change to a design variable, and determines the objectives for the new design (Ref. 33). A modified NSGA-II algorithm from pymoo is also implemented with a fixed population size (set to 40), and the algorithm is terminated after a set number of function evaluations (set to 800) (Ref. 14). As the NSGA-II algorithm examines designs, it writes these designs to the blackboard. Upon implementation of the full MABS, other agents will be able to utilize, in tandem, the designs that the various KA-S agents produce. The local search methods select solutions present in the Pareto Front Level and perform a local search to further explore the Pareto front (PF). Utilizing multiple local and global search methods enables the rapid establishment of the PF, with subsequent examination to ensure the PF is thoroughly described.
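As a rough illustration of the neighborhood-search step (not the paper's KA-S agent code), the sketch below perturbs each variable of a hypothetical design by a small step and evaluates the objectives of every new design; the variable names and objective functions are invented for the example.

```python
def neighbourhood_search(objectives, design, step=0.05):
    """Perturb each design variable by +/- step and evaluate the objectives
    of every resulting design (illustrative sketch only)."""
    neighbours = []
    for name, value in design.items():
        for delta in (-step, +step):
            candidate = dict(design, **{name: value + delta})
            neighbours.append((candidate, objectives(candidate)))
    return neighbours

# Hypothetical design variables and objective functions.
def objectives(d):
    return (abs(d["enrichment"] - 0.12), -d["height"] / (1 + d["radius"]))

base = {"enrichment": 0.10, "height": 60.0, "radius": 20.0}
for cand, objs in neighbourhood_search(objectives, base):
    print(cand, "->", objs)
```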
Optimisation problems and resolution methods in satellite scheduling and space-craft operation: a survey
Published in Enterprise Information Systems, 2021
Simulated Annealing (SA) generalises Hill Climbing with the aim of overcoming premature convergence to local optima. Indeed, by always moving to neighbouring solutions of better fitness, HC soon reaches a solution where improvements are no longer possible. The new mechanism in SA is implemented via a cooling procedure, namely, a temperature parameter T. According to the cooling mechanism, at high temperature values almost all candidate neighbouring moves can be accepted, while at low temperature values the selection of neighbouring solutions is more restrictive. The acceptance criterion uses both the objective function f and the temperature parameter T: for a minimisation problem, a move from the current solution s to a neighbouring solution s′ is accepted with probability exp(−Δf/T) if Δf = f(s′) − f(s) > 0. Clearly, the larger the temperature value T, the larger the probability of accepting a new move. The algorithm starts with a high value of T and keeps decreasing it systematically at each iteration according to a cooling procedure established a priori. It should, therefore, be noted that tuning the cooling procedure directly affects the convergence of the SA algorithm. A standard template of SA can be found in Alg. 2.
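A compact Python sketch of this acceptance rule with a geometric cooling schedule, applied to a made-up one-dimensional objective (the survey's Alg. 2 may differ in its details; the starting temperature, cooling rate, and neighbourhood move below are all assumed):

```python
import math
import random

random.seed(1)

def simulated_annealing(f, x, T=10.0, cooling=0.95, steps_per_T=50, T_min=1e-3):
    best = x
    while T > T_min:
        for _ in range(steps_per_T):
            x_new = x + random.uniform(-1.0, 1.0)      # neighbouring move
            delta = f(x_new) - f(x)                    # change in objective (minimisation)
            # Accept improvements always; accept worse moves with prob. exp(-delta / T).
            if delta <= 0 or random.random() < math.exp(-delta / T):
                x = x_new
                if f(x) < f(best):
                    best = x
        T *= cooling                                   # a priori geometric cooling schedule
    return best

# Toy objective with many local minima (illustrative only).
f = lambda x: 0.1 * x * x + math.sin(5 * x)
print(round(simulated_annealing(f, x=8.0), 3))
```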