Material Requirement Planning
Published in Susmita Bandyopadhyay, Production and Operations Analysis, 2019
Particle Swarm Optimization (PSO) is based on the swarming behavior of insects and other animals. In this algorithm, the word “swarm” refers to the population of solutions. The basic idea is to find better particles over a number of iterations (generations) of the algorithm, ultimately converging toward the optimal solution. Differential Evolution (DE) uses differences among individual solutions for mutation; genetic operators are therefore applied in DE as well. The Artificial Immune Algorithm (AIA) is based on the immune system of animal bodies. Here, an “antigen” represents a worse solution and an “antibody” represents a better solution. The basic idea is to replace the antigens (worse solutions) with antibodies (better solutions) and thereby proceed toward optimality. Ant Colony Optimization (ACO) is a nature-based optimization algorithm modeled on the colonial behavior of ants. Simulated Annealing (SA) simulates the physical annealing process, in which metals are heated and then slowly cooled in order to reduce brittleness. The algorithm’s acceptance rule is a variant of the famous Metropolis–Hastings algorithm (MHA). The Tabu Search (TS) algorithm moves to the neighboring solution with the smallest cost, using a tabu list to avoid revisiting recently explored solutions.
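As an illustration of the Metropolis-style acceptance rule mentioned above, a minimal simulated-annealing loop might look like the following sketch (not from the book; the function names, geometric cooling schedule, and parameter defaults are assumptions):

```python
import math
import random

def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.95, steps=1000):
    """Minimal SA sketch: accept worse moves with a temperature-dependent
    (Metropolis) probability, tracking the best solution seen."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbour(x)
        fy = cost(y)
        # Metropolis criterion: always accept improvements; accept a
        # worse solution with probability exp(-(fy - fx) / t).
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling  # geometric cooling: high t explores, low t exploits
    return best, fbest
```

For example, minimizing `(x - 3)**2` with a Gaussian neighbour move converges close to 3 as the temperature drops.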
Adaptive meta-heuristic to predict dent depth damage in the fixed offshore structures
Published in Stein Haugen, Anne Barros, Coen van Gulijk, Trond Kongsvik, Jan Erik Vinnem, Safety and Reliability – Safe Societies in a Changing World, 2018
W. Punurai, M.S. Azad, N. Pholdee, C. Sinsabvarodom
To minimize the objective function, five well-established self-adaptive meta-heuristics (MHs) are used. Details and notation for these methods are available in the corresponding references and are not repeated here. The MHs used are:
– Adaptive Differential Evolution (JADE) (Zhang & Sanderson, 2009).
– Evolution Strategy with Covariance Matrix Adaptation (CMAES) (Hansen, Muller et al., 2003).
– Success-History Based Adaptive Differential Evolution (SHADE) (Tanabe & Fukunaga, 2013).
– SHADE with Linear Population Size Reduction (L-SHADE) (Tanabe & Fukunaga, 2014).
– Adaptive Sine Cosine algorithm integrating Differential Evolution mutation (ASCDE) (Bureerat & Pholdee, 2017).
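Several of the methods listed above build on DE's difference-based mutation. As an illustration only (not the implementation used in this paper), JADE's "DE/current-to-pbest/1" mutation can be sketched as follows, where the population is a list of dicts holding a decision vector `"x"` and a fitness `"f"` (these names are assumptions):

```python
import random

def current_to_pbest_mutation(pop, i, F, p=0.2):
    """Illustrative JADE-style mutation:
    v_i = x_i + F*(x_pbest - x_i) + F*(x_r1 - x_r2),
    where x_pbest is drawn from the top 100*p% individuals by fitness."""
    n = len(pop)
    k = max(1, int(p * n))
    # pick pbest uniformly among the k best (minimization assumed)
    pbest = random.choice(sorted(pop, key=lambda ind: ind["f"])[:k])
    # two distinct random individuals different from the target i
    r1, r2 = random.sample([j for j in range(n) if j != i], 2)
    xi = pop[i]["x"]
    return [xi[d] + F * (pbest["x"][d] - xi[d])
                  + F * (pop[r1]["x"][d] - pop[r2]["x"][d])
            for d in range(len(xi))]
```

The adaptive variants (JADE, SHADE, L-SHADE) differ mainly in how the control parameters `F` and the crossover rate are learned from successful generations.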
Mathematica and Optimization
Published in Levent Aydin, H Seçil Artem, Selda Oterkus, Designing Engineering Structures Using Stochastic Optimization Methods, 2020
Melih Savran, Harun Sayi, Levent Aydin
Differential evolution (DE) is one of the most common stochastic search algorithms for the optimization of complicated and challenging design problems. As mentioned in Chapter 2 of this book, the algorithm is built on four main steps: initialization, mutation, crossover, and selection. Although DE is an efficient search algorithm because it maintains a population of solutions in each iteration rather than a single solution, this makes it computationally expensive, requiring more processing time. DE is a robust and reliable algorithm for finding the global optimum; however, as with other types of search methods, there is no guarantee that the global optimum will be found [7].
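The four steps named above can be put together as a minimal DE/rand/1/bin loop. This is an illustrative sketch, not the book's implementation; the function name, parameter defaults, and bound handling are assumptions:

```python
import random

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9, gens=200):
    """Minimal DE sketch: initialization, mutation, crossover, selection."""
    dim = len(bounds)
    # initialization: uniform random population within the bounds
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [cost(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # mutation: scaled difference of two individuals added to a third
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            # crossover: binomial mixing of target and mutant vectors
            jrand = random.randrange(dim)
            trial = [mutant[d] if (random.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            # selection: greedy replacement of the target by a better trial
            ft = cost(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

Note the cost of population-based search mentioned above: each generation performs `pop_size` objective evaluations, so expensive cost functions dominate the run time.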
Feature Selection for Supervised Learning and Compression
Published in Applied Artificial Intelligence, 2022
Phillip Taylor, Nathan Griffiths, Vince Hall, Zhou Xu, Alex Mouzakitis
Other optimization approaches include particle swarm optimization (PSO), which iteratively updates candidate vectors in a population based on both local and global information (Kennedy and Eberhart 1995). For feature selection using binary PSO, a candidate vector is a probabilistic representation of each feature being selected, whose entries are collapsed onto either 0 or 1 for feature set evaluation using a random threshold (Chuang et al. 2008). Similarly, brain storm optimization iteratively clusters candidates, generates new ones, and selects those with the best performance for the next iteration (Papa et al. 2018). Differential evolution approaches likewise iteratively create new populations through a process of mutation, crossover, and selection (Zhang et al. 2020).
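The binary-PSO idea described above can be sketched as follows. This is illustrative only, not Chuang et al.'s implementation: the function names, sigmoid transfer, and parameter values are assumptions, and `evaluate` stands in for any feature-set scoring function (higher is better):

```python
import math
import random

def collapse(probs):
    """Collapse a probabilistic candidate into a concrete 0/1 feature
    mask using a per-feature random threshold."""
    return [1 if random.random() < p else 0 for p in probs]

def binary_pso(evaluate, n_features, swarm_size=10, iters=50,
               w=0.7, c1=1.5, c2=1.5):
    """Minimal binary PSO sketch for feature selection."""
    # squash a real-valued position into a selection probability
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, v))))
    pos = [[random.uniform(-1, 1) for _ in range(n_features)]
           for _ in range(swarm_size)]
    vel = [[0.0] * n_features for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]
    pscore = [evaluate(collapse([sigmoid(v) for v in p])) for p in pos]
    g = max(range(swarm_size), key=lambda i: pscore[i])
    gbest, gscore = pbest[g][:], pscore[g]
    for _ in range(iters):
        for i in range(swarm_size):
            for d in range(n_features):
                # velocity blends local (pbest) and global (gbest) information
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            score = evaluate(collapse([sigmoid(v) for v in pos[i]]))
            if score > pscore[i]:
                pbest[i], pscore[i] = pos[i][:], score
                if score > gscore:
                    gbest, gscore = pos[i][:], score
    # the final collapse of the best position is itself stochastic
    return collapse([sigmoid(v) for v in gbest]), gscore
```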
A hybrid differential evolution for multi-objective optimisation problems
Published in Connection Science, 2022
To solve multi-objective optimisation problems effectively with DE, a hybrid multi-objective differential evolution based on decomposition (HMODE/D) is proposed in this paper. The handling schemes of HMODE/D are as follows. (1) A probability value for every neighbouring individual is set using the rank sum, and the local optimum is selected by roulette wheel; a mutation operator is then designed around this local optimum, and a heuristic crossover operator is constructed using the method of uniform design. (2) The search ability of HMODE/D is improved by a self-adaptive adjustment strategy in which the mutation factor is adjusted according to the norms of vectors satisfying certain conditions. (3) Parent individuals are updated using decomposition. During decomposition, an external archive is constructed for each individual: the weight values of neighbourhood individuals are compared with those of the offspring, and the individual with the smaller weight value is stored in the archive of the corresponding individual. An individual that benefits the convergence of DE is then selected from this external archive to execute the mutation operation. The effectiveness of HMODE/D is verified by comparison with state-of-the-art algorithms, and the experimental results show that HMODE/D has advantages over the compared algorithms.
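The roulette-wheel step used in scheme (1) can be illustrated generically. This sketch only shows weight-proportional selection; the actual rank-sum weights and neighbourhood structure are defined in the paper, and the function name is an assumption:

```python
import random

def roulette_select(items, weights):
    """Pick one item with probability proportional to its (positive)
    weight, by sampling a point on the cumulative weight line."""
    total = sum(weights)
    r = random.uniform(0, total)
    acc = 0.0
    for item, w in zip(items, weights):
        acc += w
        if r <= acc:
            return item
    return items[-1]  # guard against floating-point rounding
```

With weights `[1.0, 3.0]`, the second item is chosen about three times as often as the first.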
Another evolution of generalized differential evolution: variable number of dimensions
Published in Engineering Optimization, 2022
Other modifications of Differential Evolution (DE) can also be found in the literature. Grammatical Differential Evolution (GDE) was used for evolutionary programming in O'Neill and Brabazon (2006). The variable-size multi-objective DE proposed in Chen et al. (2015) uses a secondary vector to handle the number of active decision variables; note that this is the same impure-VND approach as in VLMOPSO and VLGDE3. In Wang et al. (2018), DE is used to design a deep convolutional neural network. Although its VND representation is pure, its first crossover operator inevitably favours higher-dimensionality solutions, while the second, GA-like crossover operator tries to counteract this. Furthermore, the binary representation in an originally real-coded algorithm seems unreasonable, and the algorithm handles only a single objective.