Precast segmental bridge construction in seismic zones
Published in Fabio Biondini, Dan M. Frangopol, Bridge Maintenance, Safety, Management, Resilience and Sustainability, 2012
Fabio Biondini, Dan M. Frangopol
In particular, one of the most important aspects is that Bayesian optimization can be used to select the architecture of the optimal model. In fact, for neural network models the number of adaptive parameters of the network, i.e. the model class, has to be fixed in advance, and this choice is of fundamental importance. It is not correct to simply choose the model that fits the data best: more complex models will always fit the data better, but they may be over-parameterized and so make poor predictions for new cases.
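A toy illustration of this point (a minimal sketch, not the chapter's code; the scikit-learn setup and candidate widths are illustrative assumptions): training accuracy tends to improve with network capacity, while held-out accuracy exposes over-parameterized models.

```python
# Minimal sketch: compare candidate network sizes on held-out data
# rather than on training fit alone (illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

for width in (2, 8, 32, 128):  # candidate model classes (hidden-layer widths)
    net = MLPClassifier(hidden_layer_sizes=(width,), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)
    # Training accuracy generally improves with capacity; validation
    # accuracy reveals when the extra parameters stop paying off.
    print(width, net.score(X_tr, y_tr), net.score(X_val, y_val))
```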
Machine Learning
Published in Ravi Das, Practical AI for Cybersecurity, 2021
As will be reviewed further in the next subsection, Bayesian Optimization is used for what is known as “Hyperparameter Optimization.” This technique helps to discover the best-performing configuration among all of the possible settings for the datasets that you are making use of in your Machine Learning system. Also, probabilistic measures can be used to evaluate the robustness of these algorithms. One other technique that can be used in this case is known as “Receiver Operating Characteristic Curves,” or “ROC” for short.
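As an illustrative sketch (not from the book; the classifier and dataset are stand-ins), an ROC curve can be computed with scikit-learn: each threshold on the classifier's scores yields one false-positive/true-positive pair, and the area under the curve summarizes robustness.

```python
# Minimal sketch: scoring a classifier's robustness with an ROC curve.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]          # probability of the positive class
fpr, tpr, thresholds = roc_curve(y_te, scores)  # one (FPR, TPR) point per threshold
print("AUC:", roc_auc_score(y_te, scores))      # 0.5 = chance, 1.0 = perfect
```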
Genetic Algorithm
Published in Kaushik Kumar, Divya Zindani, J. Paulo Davim, Optimizing Engineering Problems through Heuristic Techniques, 2020
Kaushik Kumar, Divya Zindani, J. Paulo Davim
Probabilistic model building techniques: The prominent models include population-based incremental learning (Baluja, 1994), the compact GA (Harik et al., 1999), the Bayesian optimization algorithm (Pelikan et al., 2000), the hierarchical Bayesian optimization algorithm (Pelikan and Goldberg, 2001), etc.
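Of the listed approaches, population-based incremental learning (PBIL) reduces to a few lines: a probability vector is sampled to form a population and then shifted toward the best individual. A minimal sketch on the toy "onemax" problem, with illustrative parameter values (not from the chapter):

```python
# Minimal PBIL sketch: maximise the number of 1s in a bit string ("onemax").
import numpy as np

rng = np.random.default_rng(0)
n_bits, pop_size, lr, n_gens = 20, 50, 0.1, 100
p = np.full(n_bits, 0.5)                      # probability vector (the model)

for _ in range(n_gens):
    pop = rng.random((pop_size, n_bits)) < p  # sample a population from the model
    best = pop[pop.sum(axis=1).argmax()]      # fittest individual under onemax
    p = (1 - lr) * p + lr * best              # shift the model toward the best

print(p.round(2))  # probabilities drift toward 1.0 as the model learns
```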
Automated microaneurysms detection in retinal images using SSA optimised U-NET and Bayesian optimised CNN
Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2023
The objective function in Equation (10) is approximated by a surrogate function with a probability distribution in our research. This problem is then posed within the context of Bayesian optimisation. The surrogate is treated as the prior distribution in the Bayes update of Equation (11). The CNN hyperparameters serve as the search space in our scenario. Bayesian optimisation employs an acquisition function to sample the search space, choosing the sample locations that optimise the prior distribution in the search for the objective function minimum. Such sample points are then assessed using the original objective function, O_i = O(x_i), generating the set of objective values O = {O_1, O_2, ..., O_n} from the n search-space samples x_1, x_2, ..., x_n. The samples and evaluations are obtained sequentially to create a collection of data points D = {(x_1, O(x_1)), ..., (x_n, O(x_n))}, which is then used to update the surrogate function (prior distribution) into the posterior of Equation (11). A Gaussian process model with a covariance kernel function K and a mean µ is used in this study to model the objective function. According to Astudillo and Frazier (2021), the joint prior distribution of the objective function values is multivariate Gaussian.
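The loop described above can be sketched in a few lines. This is an illustrative sketch rather than the authors' code: a 1-D toy objective stands in for the CNN hyperparameter space, scikit-learn's GaussianProcessRegressor provides the GP surrogate, and expected improvement serves as the acquisition function (the paper's Equations (10) and (11) are not reproduced here).

```python
# Sketch of the described loop: fit a GP surrogate to the evaluated points
# D = {(x_i, O(x_i))}, let an acquisition function pick the next sample,
# evaluate the true objective there, and update the surrogate.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):                     # toy stand-in for the objective O(x)
    return np.sin(3 * x) + 0.5 * x**2

grid = np.linspace(-2, 2, 400).reshape(-1, 1)   # candidate sample locations
X = np.array([[-1.5], [0.0], [1.5]])            # initial design
O = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(10):
    gp.fit(X, O)                                # update the surrogate with D
    mu, sigma = gp.predict(grid, return_std=True)
    # Expected-improvement acquisition for minimisation:
    z = (O.min() - mu) / np.maximum(sigma, 1e-9)
    ei = (O.min() - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[ei.argmax()].reshape(1, -1)   # location optimising the acquisition
    X = np.vstack([X, x_next])                  # evaluate the true objective there
    O = np.append(O, objective(x_next).ravel())

print("best x, O(x):", X[O.argmin()], O.min())
```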
Sequential Designs for Filling Output Spaces
Published in Technometrics, 2023
Shangkun Wang, Adam P. Generale, Surya R. Kalidindi, V. Roshan Joseph
Bayesian optimization (Garnett 2023) is a popular technique for the global optimization of expensive black-box functions. The key idea in Bayesian optimization is to introduce an acquisition function that accounts not only for the function value but also for its uncertainty estimate. The expected improvement (EI) criterion (Jones, Schonlau, and Welch 1998) is one such acquisition function. The EI criterion encourages the design points to explore the experimental region while exploiting the function, which helps them jump out of local regions and move toward the global optimum. The EI algorithm uses Gaussian process (GP) modeling, which automatically provides uncertainty estimates alongside predictions. However, as mentioned in the introduction, the high training cost of GP models can become a computational bottleneck.
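For reference, the EI criterion for minimization can be written in terms of the GP posterior mean µ(x), posterior standard deviation σ(x), and the best value observed so far, f_min (standard form; the notation here is ours, not the article's):

```latex
% Expected improvement at a candidate point x, for minimization, where
% \Phi and \phi are the standard normal CDF and PDF:
\[
\mathrm{EI}(x) = \bigl(f_{\min} - \mu(x)\bigr)\,
                 \Phi\!\left(\frac{f_{\min} - \mu(x)}{\sigma(x)}\right)
               + \sigma(x)\,
                 \phi\!\left(\frac{f_{\min} - \mu(x)}{\sigma(x)}\right).
\]
```

The first term rewards candidates whose predicted value improves on f_min (exploitation); the second rewards candidates with large predictive uncertainty (exploration).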
Hierarchical active learning for defect localization in 3D systems
Published in IISE Transactions on Healthcare Systems Engineering, 2023
In recent decades, Bayesian optimization has emerged as a powerful technique for the efficient optimization of expensive black-box functions. By using probabilistic surrogate models such as GPs, Bayesian optimization efficiently navigates the search space to identify the optimal solution while minimizing the number of calls to computationally demanding simulation models (Brochu et al., 2010; Srinivas et al., 2009). Bayesian optimization has been applied to solve a variety of inverse problems (Deng et al., 2020; Huang et al., 2021; Winter et al., 2023). Researchers have also extended the standard Bayesian optimization framework to accommodate multi-task problems, incorporating various surrogate structures to better represent the relationships among different tasks, datasets, and functions (Astudillo & Frazier, 2019; Bonilla et al., 2007; Shen et al., 2022; Swersky et al., 2013).