Genetic Algorithm
Published in Kaushik Kumar, Divya Zindani, J. Paulo Davim, Optimizing Engineering Problems through Heuristic Techniques, 2020
Probabilistic model building techniques: The prominent models include population-based incremental learning (Baluja, 1994), the compact GA (Harik et al., 1999), the Bayesian optimization algorithm (Pelikan et al., 2000), the hierarchical Bayesian optimization algorithm (Pelikan and Goldberg, 2001), etc.
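Population-based incremental learning, the simplest of the models listed above, maintains a probability vector over bit positions, samples a population from it, and shifts the vector toward the best sample. The sketch below is a minimal illustration (function names, defaults, and the OneMax test problem are our own choices, not Baluja's original parameterization):

```python
import random

def pbil(fitness, n_bits, pop_size=20, generations=100,
         learn_rate=0.1, seed=0):
    """Minimal PBIL loop: maintain a probability vector over bits,
    sample a population, and shift the vector toward the elite sample."""
    rng = random.Random(seed)
    prob = [0.5] * n_bits  # start unbiased
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        pop = [[1 if rng.random() < p else 0 for p in prob]
               for _ in range(pop_size)]
        elite = max(pop, key=fitness)
        f = fitness(elite)
        if f > best_fit:
            best, best_fit = elite, f
        # move the probability vector toward the elite individual
        prob = [(1 - learn_rate) * p + learn_rate * b
                for p, b in zip(prob, elite)]
    return best, best_fit

# OneMax: maximize the number of 1-bits
solution, value = pbil(sum, n_bits=16)
```

Unlike a genetic algorithm, no explicit population is carried between generations; the probability vector is the entire model of promising solutions.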
Feature extraction and qualification
Published in Ruijiang Li, Lei Xing, Sandy Napel, Daniel L. Rubin, Radiomics and Radiogenomics, 2019
Clustering is another feature extraction approach that aims to find relevant features and combine them via their cluster centroids based on some similarity measure. As with the supervised wrapper methods mentioned before, where prediction performance guides the selection of feature subsets, a criterion is needed that evaluates the quality of the feature set produced by a clustering algorithm. Two popular clustering methods are k-means and hierarchical clustering. The k-means algorithm is an iterative expectation-maximization (EM) method: given the number of clusters and a set of points in a Euclidean space, it minimizes the total sum of distances from each point to its nearest cluster centroid, thereby finding the k clusters that best group the data. Dy and Brodley reported feature subset selection using EM clustering (FSSEM) with two criteria: scatter separability and maximum likelihood (Dy and Brodley 2000). The former, borrowed from discriminant analysis, evaluates feature subsets by how well they group the data into separable clusters. The latter assumes Gaussian clusters and measures how well the Gaussian mixture model built from the chosen features fits the data. Their framework also allows different combinations of clustering algorithms and criteria to be explored. Hong et al. proposed a clustering-ensembles-guided feature selection algorithm (CEFS) that combines clustering ensembles with the population-based incremental learning algorithm (Hong et al. 2008). Because different clustering methods may give contradictory results, the clustering-ensembles method leverages multiple clustering solutions to reach a more robust consensus. CEFS then searches for the feature subset on which the clustering algorithm produces the solution most similar to the ensemble consensus.
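The k-means EM loop described above alternates an assignment step (E) and a centroid-update step (M). The following is a minimal illustration on toy 2-D points, not the FSSEM implementation:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: alternate assignment (E-step) and centroid
    update (M-step) to reduce total within-cluster distance."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # E-step: assign each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # M-step: move each centroid to the mean of its cluster
        for j, cl in enumerate(clusters):
            if cl:  # keep the old centroid if its cluster is empty
                centroids[j] = tuple(sum(v) / len(cl) for v in zip(*cl))
    return centroids, clusters

pts = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
centroids, clusters = kmeans(pts, k=2)
```

In a wrapper setting like FSSEM, this loop would be run on each candidate feature subset and the resulting clusters scored by a criterion such as scatter separability.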
Dhillon et al. developed an information-theoretic feature clustering method by minimizing the mutual information measured by generalized Jensen-Shannon divergence (Dhillon et al. 2003).
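The generalized Jensen-Shannon divergence used by Dhillon et al. is a weighted average of Kullback-Leibler divergences of each distribution from their weighted mixture. A minimal sketch (our own hedged implementation, not the authors' code):

```python
from math import log2

def kl(p, q):
    """Kullback-Leibler divergence in bits; assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q, w1=0.5, w2=0.5):
    """Generalized Jensen-Shannon divergence with mixture weights w1, w2."""
    m = [w1 * pi + w2 * qi for pi, qi in zip(p, q)]
    return w1 * kl(p, m) + w2 * kl(q, m)

d = js([0.5, 0.5, 0.0], [0.0, 0.5, 0.5])  # -> 0.5 bits
```

Features whose conditional distributions have small divergence can be merged into one cluster with little loss of discriminative information.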
Optimization of the p-xylene oxidation process by a multi-objective differential evolution algorithm with adaptive parameters co-derived with the population-based incremental learning algorithm
Published in Engineering Optimization, 2018
DE is a population-based evolutionary algorithm. The feedback information in the evolutionary process is probabilistic and subject to randomness, so building a model of this information can enhance the performance of the DE algorithm. The population-based incremental learning algorithm (PBIL) combines the genetic algorithm with statistical knowledge, building a probability model from superior individuals to generate new offspring. It is simple, fast, and accurate, and applicable to a wide range of problems. The fitness of a parameter setting is assessed through the fitness of the individuals that use it. In this study, the PBIL algorithm was co-evolved with the MODE algorithm to implement a self-adaptive strategy for the MODE parameters. The parameters were encoded as a symbiotic individual, represented as a binary string, that co-evolved with the original individual, so each individual carries its own parameters. In each generation, MODE evolved the original population while PBIL simultaneously processed the symbiotic parameter strings of the superior original individuals. The two algorithms were designed as a co-evolutionary system that shares optimal information. The proposed algorithm, population-based incremental learning–multi-objective differential evolution (PBMODE), can obtain a set of compromise solutions to successfully guide PTA production.
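The coupling described above can be sketched in a single-objective toy form: each individual's DE scale factor F is decoded from a symbiotic binary string, and a PBIL probability vector over those strings is nudged toward the strings of individuals that improved. This is an illustrative simplification under our own assumptions (mutation-only DE/rand/1, one encoded parameter), not the paper's multi-objective PBMODE:

```python
import random

def de_pbil(objective, dim, pop_size=20, generations=60, seed=1):
    """Toy DE minimizer whose scale factor F is self-adapted by PBIL."""
    rng = random.Random(seed)
    n_bits, lr = 8, 0.1

    def decode(bits):  # map 8 bits to F in [0.1, 0.9]
        return 0.1 + 0.8 * int("".join(map(str, bits)), 2) / 255

    prob = [0.5] * n_bits  # PBIL model over parameter strings
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    params = [[1 if rng.random() < p else 0 for p in prob]
              for _ in range(pop_size)]
    fit = [objective(x) for x in pop]

    for _ in range(generations):
        improved = []
        for i in range(pop_size):
            F = decode(params[i])  # this individual's symbiotic parameter
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     for d in range(dim)]
            if objective(trial) < fit[i]:  # greedy DE selection
                pop[i], fit[i] = trial, objective(trial)
                improved.append(params[i])
        for bits in improved:  # shift PBIL model toward superior parameters
            prob = [(1 - lr) * p + lr * x for p, x in zip(prob, bits)]
        params = [[1 if rng.random() < p else 0 for p in prob]
                  for _ in range(pop_size)]
    return min(fit)

best = de_pbil(lambda x: sum(v * v for v in x), dim=3)
```

The key design choice mirrors the paper: parameter fitness is never evaluated directly; a parameter string is reinforced only when the individual carrying it improves.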
A population-based incremental learning algorithm to identify optimal location of left-turn restrictions in urban grid networks
Published in Transportmetrica B: Transport Dynamics, 2023
Murat Bayrak, Zhengyao Yu, Vikash V. Gayah
This section describes the methods used in this paper to determine where left turns should be restricted spatially across an urban network. We first introduce the network settings used in this work and then discuss the two optimization approaches applied. The first is population-based incremental learning (PBIL), a general genetic optimization method that can be applied to network optimization problems like this one. The second is brute-force enumeration, which is used to assess the effectiveness of PBIL at solving this problem.
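Because each intersection either restricts left turns or not, the search space is binary and the brute-force baseline simply enumerates all 2^n patterns. The sketch below uses a toy stand-in objective (entirely hypothetical, not the paper's traffic model) just to show the enumeration mechanics:

```python
from itertools import product

def brute_force(objective, n_intersections):
    """Enumerate every binary restriction pattern (1 = left turns banned)
    and return the minimizer. Feasible only for small networks: 2**n cases."""
    best = min(product([0, 1], repeat=n_intersections), key=objective)
    return best, objective(best)

# Toy stand-in delay model (hypothetical): each ban saves some delay,
# but banning adjacent intersections incurs a rerouting penalty.
def delay(pattern):
    adj_penalty = sum(a and b for a, b in zip(pattern, pattern[1:]))
    return 10 - 3 * sum(pattern) + 8 * adj_penalty

pattern, cost = brute_force(delay, n_intersections=4)
```

For realistic network sizes the exponential enumeration is intractable, which is exactly why PBIL is used as the practical search method and brute force only as a small-instance benchmark.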