2 Injection Processes and Storage
Published in Xia-Ting Feng, Rock Mechanics and Engineering, 2017
L. Ribeiro e Sousa, R. Leal e Sousa, Eurípedes Vargas, Raquel Velloso, Karim Karam
The most straightforward way to make inference in a BN, if efficiency were not an issue, would be to use the equations above to compute the probability of every combination of values and then marginalize out the variables not needed for the result. Several algorithms exist for efficient inference in BNs, and they can be grouped into two families: exact inference methods and approximate inference methods. The most common exact inference method is the Variable Elimination algorithm, which eliminates (by summation or integration) the non-query, non-observed variables one by one, summing each out of the product of the factors in which it appears. This approach exploits the independence relationships between the variables of the network. Approximate inference algorithms are used when exact inference would be computationally expensive, for example in temporal models, where the structure of the network is highly repetitive, or in densely connected networks.
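The mechanics of summing out a non-query variable can be shown on a minimal two-node network. The network A → B and its probability tables below are hypothetical, chosen only to illustrate the elimination step:

```python
# A minimal sketch of variable elimination on a two-node network A -> B.
# The probability tables are hypothetical, for illustration only.

p_a = {True: 0.3, False: 0.7}                      # prior P(A)
p_b_given_a = {(True, True): 0.9, (True, False): 0.1,
               (False, True): 0.2, (False, False): 0.8}
# key (a, b) -> P(B = b | A = a)

def query_b(b):
    """P(B = b), obtained by summing the non-query variable A
    out of the product of the factors it appears in."""
    return sum(p_a[a] * p_b_given_a[(a, b)] for a in (True, False))

print(query_b(True))   # 0.3*0.9 + 0.7*0.2 = 0.41
```

In a larger network the same step is applied repeatedly, and choosing a good elimination order is what makes the algorithm efficient.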
Machine Learning
Published in Pedro Larrañaga, David Atienza, Javier Diaz-Rozo, Alberto Ogbechie, Carlos Puerto-Santana, Concha Bielza, Industrial Applications of Machine Learning, 2019
Pedro Larrañaga, David Atienza, Javier Diaz-Rozo, Alberto Ogbechie, Carlos Puerto-Santana, Concha Bielza
Approximate inference methods balance the accuracy of the results against the capability to deal with complex models, where exact inference is intractable. Like exact inference, approximate inference is also NP-hard in general BNs (Dagum and Luby, 1993). The most successful idea is to use stochastic simulation techniques based on Monte Carlo methods. The network is used to generate a large number of cases (full instantiations) from the JPD. The probability of interest is then estimated by counting observed frequencies in the samples. As more cases are generated, the approximation to the exact probability improves (by the law of large numbers).
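A hedged sketch of this idea, reusing a hypothetical two-node network A → B: each case is drawn ancestrally from the joint distribution, and a marginal probability is estimated by counting frequencies.

```python
import random

random.seed(0)

# Hypothetical network A -> B, for illustration only.
p_a_true = 0.3                               # P(A = True)
p_b_given_a = {True: 0.9, False: 0.2}        # P(B = True | A = a)

def sample_case():
    """Generate one full instantiation (a, b) from the JPD,
    sampling each variable given its parent's value."""
    a = random.random() < p_a_true
    b = random.random() < p_b_given_a[a]
    return a, b

n = 100_000
hits = sum(1 for _ in range(n) if sample_case()[1])
estimate = hits / n      # approaches the exact P(B = True) = 0.41 as n grows
print(estimate)
```

By the law of large numbers the frequency converges to the exact marginal, with error shrinking roughly as 1/√n.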
Signature Generation Algorithms for Polymorphic Worms
Published in Mohssen Mohammed, Al-Sakib Khan Pathan, Automatic Defense Against Zero-day Polymorphic Worms in Communication Networks, 2016
Mohssen Mohammed, Al-Sakib Khan Pathan
Bayesian parameter inference in the incomplete data case is also substantially more complicated. The parameters and missing data are coupled in the posterior distribution, as can be seen by multiplying Equation 7.46 by the parameter prior and normalizing. Inference can be achieved via approximate methods such as MCMC [89], for example Gibbs sampling.
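The coupling between parameters and missing data is what Gibbs sampling resolves by alternation. A minimal sketch, using an assumed Beta-Bernoulli model (hypothetical data, not from the source): draw the parameter given the completed data, then redraw the missing observation given the current parameter.

```python
import random

random.seed(1)

# Hedged sketch: Gibbs sampling for a Bernoulli parameter theta with a
# Beta(1, 1) prior when one observation is missing. Data are hypothetical.
observed = [1, 1, 0, 1, 1, 0, 1, 1]    # known observations
missing = 0                            # initial guess for the missing value
draws = []

for it in range(20_000):
    data = observed + [missing]
    # (1) Sample theta from its conditional posterior given completed data:
    #     Beta(1 + successes, 1 + failures).
    theta = random.betavariate(1 + sum(data), 1 + len(data) - sum(data))
    # (2) Resample the missing Bernoulli observation given current theta.
    missing = 1 if random.random() < theta else 0
    if it >= 1000:                     # discard burn-in draws
        draws.append(theta)

print(sum(draws) / len(draws))         # estimate of the posterior mean
```

The chain's stationary distribution is the joint posterior over theta and the missing value; the retained theta draws therefore approximate its marginal posterior (here Beta(7, 3), mean 0.7).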
Bayesian Network–Based Fault Diagnostic System for Nuclear Power Plant Assets
Published in Nuclear Technology, 2023
Xingang Zhao, Xinyan Wang, Michael W. Golay
For networks with large, complex structures, exact inference may sometimes be too slow. Approximate inference algorithms trade off accuracy against computational cost. Two main families of approximate algorithms exist: sampling methods and variational methods. The sampling methods perform inference based on samples randomly generated from a complex joint distribution of interest, and the distribution is then approximated on the basis of those random samples. These methods generally rely on core statistical techniques such as forward sampling, importance sampling, and instances of the Markov chain Monte Carlo (MCMC) technique like Metropolis-Hastings sampling or Gibbs sampling.
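Of the MCMC instances named above, Metropolis-Hastings can be sketched compactly. The target below is an assumed unnormalized standard normal density, chosen only for illustration; the algorithm needs the target only up to a normalizing constant.

```python
import math
import random

random.seed(2)

# Hedged sketch of Metropolis-Hastings with a symmetric random-walk proposal.
def unnormalized(x):
    return math.exp(-0.5 * x * x)     # proportional to N(0, 1), for illustration

x = 0.0
samples = []
for it in range(50_000):
    proposal = x + random.uniform(-1.0, 1.0)   # symmetric proposal move
    # Accept with probability min(1, target ratio); because the proposal is
    # symmetric, its densities cancel in the acceptance ratio.
    if random.random() < unnormalized(proposal) / unnormalized(x):
        x = proposal
    if it >= 5000:                             # discard burn-in
        samples.append(x)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)   # should approach the target's mean 0 and variance 1
```

Gibbs sampling is the special case in which each proposal draws one variable from its exact conditional and is therefore always accepted.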