Bayesian Classification of Genomic Big Data
Published in Ervin Sejdić, Tiago H. Falk, Signal Processing and Machine Learning for Biomedical Big Data, 2018
Ulisses M. Braga-Neto, Emre Arslan, Upamanyu Banerjee, Arghavan Bahadorinejad
Bayesian analysis of the complex models used in recent applications often involves intractable likelihood functions, which has prompted the development of new algorithms collectively known as approximate Bayesian computation (ABC). In this approach, one generates candidate parameters by sampling from the prior distribution and uses each candidate to create a model-based simulated data set. If the simulated data set conforms to the observed data set, the candidate is retained as a sample from the posterior distribution. One thereby avoids evaluating the likelihood function, which classical Bayesian posterior simulation methods require. The ABC approach can be implemented via rejection sampling, MCMC, and sequential Monte Carlo methods [22]. Using the LC-MS proteomics model described in the previous section, we first calibrate the prior hyperparameters with an ABC rejection-sampling scheme and then use an ABC method implemented via an MCMC procedure to obtain samples from the posterior distribution of the protein concentrations, from which we derive the ABC-MCMC classifier for LC-MS data.
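To make the mechanics concrete, the following is a minimal Python sketch of a generic ABC-MCMC scheme of the kind the excerpt refers to. It is not the authors' LC-MS classifier: simulate_data, summary, the standard normal prior, and the random-walk proposal are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical stand-ins (not the chapter's LC-MS proteomics model) ---
def simulate_data(theta, n=50):
    """Forward model: draw a simulated data set for a candidate parameter theta."""
    return rng.normal(loc=theta, scale=1.0, size=n)

def summary(x):
    """Summary statistics used to compare simulated and observed data sets."""
    return np.array([x.mean(), x.std()])

def distance(s_sim, s_obs):
    """Euclidean distance between summary-statistic vectors."""
    return np.linalg.norm(s_sim - s_obs)

def log_prior(theta):
    """Standard normal prior on theta (an assumption for this sketch)."""
    return -0.5 * theta ** 2

def abc_mcmc(x_obs, n_iter=5000, eps=0.3, step=0.5, theta0=0.0):
    """Likelihood-free MCMC: a proposed move is kept only if its simulated data
    set falls within eps of the observed summaries (Metropolis step on the prior)."""
    s_obs = summary(x_obs)
    theta, samples = theta0, []
    for _ in range(n_iter):
        theta_prop = theta + step * rng.normal()      # symmetric random-walk proposal
        x_sim = simulate_data(theta_prop)             # simulate instead of evaluating a likelihood
        if distance(summary(x_sim), s_obs) <= eps:    # "conforms to the observed data" check
            # With a symmetric proposal, the Metropolis-Hastings ratio reduces to the prior ratio
            if np.log(rng.uniform()) < log_prior(theta_prop) - log_prior(theta):
                theta = theta_prop
        samples.append(theta)
    return np.array(samples)

x_obs = simulate_data(theta=1.2)                      # stand-in for an observed data set
draws = abc_mcmc(x_obs)
print(draws[1000:].mean())                            # posterior-mean estimate after burn-in
```

Because the likelihood never appears, the only model-specific ingredients are the simulator, the summary statistics, and the tolerance eps.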
Glossary of scientific and technical terms in bioengineering and biological engineering
Published in Megh R. Goyal, Scientific and Technical Terms in Bioengineering and Biological Engineering, 2018
Approximate Bayesian computation (ABC) is a statistical framework that uses simulation modeling to approximate the Bayesian posterior distribution of parameters of interest, often via multiple summary statistics.
Parametric analysis of time-censored aggregate lifetime data
Published in IISE Transactions, 2020
Piao Chen, Zhi-Sheng Ye, Qingqing Zhai
ABC is a Bayesian approach that does not require the specification of a likelihood function. Given a prior π(θ) for the parameters θ, the ABC algorithm proceeds in the following way. First, we sample a candidate parameter θ* from π(θ). With this candidate, a dataset x* is generated from the assumed model, which has the same generating mechanism as the observed dataset x, so that the distributional properties of the simulated data can match those of the observed data. We then compute the distance ρ(x*, x) between the simulated data and the observed data, where the distance function ρ is usually the difference between some summary statistics. If ρ(x*, x) is not larger than a prefixed tolerance ε, then the simulated data is “close enough” to the observed data, and we keep θ* as a sample from the posterior. Otherwise, θ* is rejected. A flowchart of the ABC sampling procedure is shown in Figure 3.
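The steps above map directly onto a short rejection sampler. The sketch below is illustrative only: the Exponential(1) prior, the Weibull lifetime model, and the mean/median summary statistics are placeholders, not the aggregate-lifetime model used in the article.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_prior():
    """Draw a candidate parameter theta* from the prior pi(theta) (Exponential(1) here)."""
    return rng.exponential(1.0)

def simulate(theta, n=100):
    """Generate a dataset x* from the assumed model (a toy Weibull shape-parameter model)."""
    return rng.weibull(a=theta, size=n)

def rho(x_sim, x_obs):
    """Distance between summary statistics of the simulated and observed data."""
    s = lambda x: np.array([x.mean(), np.median(x)])
    return np.abs(s(x_sim) - s(x_obs)).sum()

def abc_rejection(x_obs, eps=0.2, n_keep=200):
    kept = []
    while len(kept) < n_keep:
        theta_star = sample_prior()              # step 1: candidate from the prior
        x_star = simulate(theta_star)            # step 2: simulate a dataset from the model
        if rho(x_star, x_obs) <= eps:            # step 3: keep theta* only if "close enough"
            kept.append(theta_star)
    return np.array(kept)

x_obs = rng.weibull(a=1.5, size=100)             # pretend observed lifetimes
posterior_sample = abc_rejection(x_obs)
print(posterior_sample.mean())
```

Shrinking eps tightens the approximation to the true posterior at the cost of a lower acceptance rate.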
Improved calibration of building models using approximate Bayesian calibration and neural networks
Published in Journal of Building Performance Simulation, 2023
Approximate Bayesian Computation (ABC), or likelihood-free inference, is an emerging method used in situations where a probabilistic likelihood function is difficult or impossible to define (Sunnåker et al. 2013). ABC methods follow the same ideology as the seminal KOH method; however, the GPs are replaced by a generic model and a predefined distance metric used to approximate the likelihood function.
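As a rough illustration of replacing an explicit likelihood with a simulator plus a distance metric, the sketch below weights prior draws of two calibration inputs by a Gaussian kernel of the RMSE between simulated and measured hourly outputs. The building_model function, parameter ranges, and noise level are invented for the example and are unrelated to the KOH formulation or the paper's building models.

```python
import numpy as np

rng = np.random.default_rng(2)

def building_model(params, n_hours=24):
    """Hypothetical stand-in for an energy simulation: hourly loads from two inputs."""
    infiltration, u_value = params
    t = np.arange(n_hours)
    return 5.0 + infiltration * np.sin(2 * np.pi * t / 24) + u_value * np.cos(2 * np.pi * t / 24)

def distance(sim, obs):
    """Predefined distance metric between simulated and measured outputs (RMSE)."""
    return np.sqrt(np.mean((sim - obs) ** 2))

# Synthetic "measured" hourly data: true inputs 1.0 and 0.5 plus sensor noise
obs = building_model([1.0, 0.5]) + rng.normal(0.0, 0.1, size=24)

# Draw candidate inputs from uniform priors and turn each distance into an
# approximate likelihood via a Gaussian kernel of bandwidth eps.
eps = 0.2
draws = rng.uniform([0.0, 0.0], [2.0, 2.0], size=(5000, 2))
dists = np.array([distance(building_model(p), obs) for p in draws])
weights = np.exp(-0.5 * (dists / eps) ** 2)
weights /= weights.sum()

# Posterior-mean estimates of the two calibration inputs
print(weights @ draws)
```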
Probabilistic inference of reaction rate parameters from summary statistics
Published in Combustion Theory and Modelling, 2018
Mohammad Khalil, Habib N. Najm
The DFI technique enforces the strong expectation constraint (3), albeit relaxed for computational tractability, by allowing for some deviation between the computed expectation F(x) and the specified F. The method involves constructing a joint PDF for the data vector x and parameter vector θ given the summary statistic F, based on ABC [49]. ABC methods are predominantly used for Bayesian inference when the likelihood function is either computationally expensive to evaluate or cannot be formulated explicitly. In this context, ABC provides a likelihood function for the data given the summary statistic by relaxing the expectation constraint, Equation (3), through a kernel density. Our choice for the ABC data likelihood is a Gaussian kernel function of the form [35]

p_ABC(x | F) = (1/z) exp(−δ [F(x) − F]²),

where F(x) is the expectation of interest for a given data realisation x; δ is a positive constant that dictates the level of consistency of the proposed data sets with the given statistics; and z is a normalisation factor. The larger the value of δ, the closer the data realisations are to satisfying the given constraint. The term p(θ | x) appearing below is a conditional prior in the ME context (prior to assimilating the available information in the form of summary statistics) and is simply the classical Bayesian posterior PDF of the parameter vector θ given the data vector x. The resulting DFI solution is a posterior PDF of the form

p(x, θ | F) = p(θ | x) p_ABC(x | F).   (7)

Equation (7) provides a probabilistic chain-rule decomposition of the joint data-and-parameter posterior PDF as the product of the conditional density of the parameters given the data and the conditional density of the data given the constraint F. DFI marginalises the joint (x, θ) posterior over the data space, arriving at a pooled parameter posterior. The procedure involves two steps. The first step consists of sampling the data according to the marginal density p_ABC(x) using MCMC. In the second step, the consistent Bayesian posteriors, i.e. the p(θ | x) densities corresponding to the consistent MCMC-generated data sets, are pooled, providing an appropriately averaged parametric posterior that represents our best state of knowledge about the uncertain parameter vector θ.
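A toy sketch of the two-step procedure follows, under assumptions that are not in the paper: the data x is a vector of ten Gaussian observations with unknown mean θ, the summary statistics tie the sample mean and standard deviation of x to specified values, θ has a conjugate normal prior, and p_ABC takes the Gaussian-kernel form assumed above. Step 1 runs a random-walk MCMC over data sets x targeting p_ABC(x); step 2 pools the conjugate posteriors p(θ | x) over the retained data sets.

```python
import numpy as np

rng = np.random.default_rng(3)

# Specified statistics F (sample mean) and s (sample std), data size, and consistency constant delta
n, F_spec, s_spec, delta = 10, 2.0, 1.0, 50.0

def log_p_abc(x):
    """Gaussian-kernel ABC density over data sets; larger delta enforces the
    specified statistics more tightly (assumed form, see text)."""
    return -delta * ((x.mean() - F_spec) ** 2 + (x.std() - s_spec) ** 2)

def posterior_given_x(x, prior_var=100.0, noise_var=1.0):
    """Conjugate normal posterior p(theta | x) under a N(0, prior_var) prior and known noise."""
    post_var = 1.0 / (1.0 / prior_var + len(x) / noise_var)
    post_mean = post_var * x.sum() / noise_var
    return post_mean, post_var

# Step 1: random-walk MCMC over data sets x, targeting p_ABC(x).
x, kept = rng.normal(F_spec, s_spec, size=n), []
for i in range(20000):
    x_prop = x + 0.2 * rng.normal(size=n)
    if np.log(rng.uniform()) < log_p_abc(x_prop) - log_p_abc(x):
        x = x_prop
    if i > 2000 and i % 20 == 0:                 # discard burn-in, thin the chain
        kept.append(x.copy())

# Step 2: pool the consistent Bayesian posteriors p(theta | x) over the kept data sets.
grid = np.linspace(0.0, 4.0, 401)
pooled = np.zeros_like(grid)
for x_i in kept:
    m, v = posterior_given_x(x_i)
    pooled += np.exp(-0.5 * (grid - m) ** 2 / v) / np.sqrt(2.0 * np.pi * v)
pooled /= len(kept)

print(grid[pooled.argmax()])                      # mode of the pooled posterior for theta
```

The pooled density is simply the average of the per-data-set posteriors, which is the marginalisation over the data space described in the excerpt.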