Handling missing data in large databases
Published in Uwe Engel, Anabel Quan-Haase, Sunny Xun Liu, Lars Lyberg, Handbook of Computational Social Science, Volume 2, 2021
Martin Spiess, Thomas Augustin
The likelihood function is also at the heart of parametric Bayesian inference, where prior knowledge (or ignorance) about model parameters, in the form of a prior distribution, is combined with the observed data information via the likelihood function to form the so-called posterior distribution of the parameter. This posterior distribution reflects the knowledge about the parameters of scientific interest in the light of new data and is used to draw inferences. Bayesian inferences, like direct-likelihood inferences, are generally not evaluated from a frequentist perspective but are based on their plausibility or their support from the observed data (Rubin, 1976). To evaluate models, the posterior distribution of the parameter may be inspected and Bayes factors comparing different models can be calculated. There is, however, also a demand to evaluate Bayesian inferences from a frequentist point of view (e.g. Rubin, 1996). In the case of direct-likelihood inferences, models are compared via likelihood ratios, that is, ratios of the likelihood functions of different models, each evaluated at its maximum.
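As a minimal illustration of this prior-times-likelihood update (a hypothetical sketch in Python, not taken from the chapter), a conjugate Beta-Binomial model gives the posterior in closed form, and a likelihood ratio compares two candidate parameter values at their respective likelihoods:

```python
import numpy as np
from scipy import stats

# Hypothetical data: 7 successes in 10 Bernoulli trials.
successes, trials = 7, 10

# A Beta(1, 1) (uniform) prior combined with a binomial likelihood
# yields a Beta posterior: Beta(1 + successes, 1 + failures).
posterior = stats.beta(1 + successes, 1 + trials - successes)
print("posterior mean:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))

# Direct-likelihood model comparison via a likelihood ratio:
# the MLE theta = 0.7 versus a fixed theta = 0.5.
l_mle = stats.binom.pmf(successes, trials, 0.7)
l_null = stats.binom.pmf(successes, trials, 0.5)
print("likelihood ratio:", l_mle / l_null)
```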
Determinants of health complaints of Bodetabek commuter workers using Bayesian multilevel logistic regression
Published in Yuli Rahmawati, Peter Charles Taylor, Empowering Science and Mathematics for Global Competitiveness, 2019
When the posterior distribution was difficult to derive mathematically, it was approximated using Markov Chain Monte Carlo (MCMC) (Hox, 2010). MCMC is a simulation technique that generates random samples from a complex posterior distribution. From a large number of simulated random samples, the posterior mean, standard deviation, density plot, and quantiles of this distribution can be calculated (Browne, 2017). In the Bayesian MCMC approach, model fit (goodness of fit) can be tested by comparing the Deviance Information Criterion (DIC) of each model,

$$\mathrm{DIC} = \bar{D} + p_D,$$

where $\bar{D}$ is the posterior mean of the deviance and $p_D$ is the effective number of parameters.
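The sketch below (hypothetical data and draws, not from the study) shows how the posterior summaries and the DIC would be computed from MCMC output for a simple normal-mean model; `deviance` is an illustrative helper, not a library function:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical observed data and posterior draws of a normal mean
# (the draws would normally come from an MCMC sampler).
y = rng.normal(1.0, 1.0, size=50)
mu_draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=5000)

print("posterior mean:", mu_draws.mean())
print("posterior sd:", mu_draws.std())
print("posterior quantiles:", np.quantile(mu_draws, [0.025, 0.5, 0.975]))

def deviance(mu):
    # Deviance = -2 * log-likelihood under a N(mu, 1) model.
    return -2.0 * stats.norm.logpdf(y, loc=mu, scale=1.0).sum()

d_bar = np.mean([deviance(m) for m in mu_draws])  # posterior mean deviance
d_hat = deviance(mu_draws.mean())                 # deviance at posterior mean
p_d = d_bar - d_hat                               # effective no. of parameters
dic = d_bar + p_d                                 # DIC = D-bar + p_D
print(f"DIC = {dic:.1f} (p_D = {p_d:.2f})")
```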
An evaluation method of methodology for integration of HALT, HASS and ADT
Published in Stein Haugen, Anne Barros, Coen van Gulijk, Trond Kongsvik, Jan Erik Vinnem, Safety and Reliability – Safe Societies in a Changing World, 2018
Tianji Zou, Peng Li, Wei Dang, Kai Liu, Ge Zhang
In this paper, the observed data x (also known as sample information) are obtained from the degradation data of the ADT, while the data from HALT and HASS can be regarded as prior information. The posterior distribution is the distribution of the parameters after the observed data have been taken into account; it combines the observed data with the prior information and forms the core of Bayesian inference. Given the observed data x, Bayes' theorem expresses the posterior distribution of the parameters as

$$\pi(\theta \mid x) = \frac{\pi(\theta)\, p(x \mid \theta)}{\int_{\Theta} \pi(\theta)\, p(x \mid \theta)\, d\theta}.$$
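A grid approximation makes this formula concrete: the sketch below (hypothetical numbers, standing in for the ADT degradation data and the HALT/HASS-based prior) multiplies the prior by the likelihood and normalizes by the integral in the denominator:

```python
import numpy as np
from scipy import stats

# Hypothetical degradation-style observations (stand-in for ADT data);
# theta is the unknown mean of a normal measurement model.
x = np.array([2.1, 2.4, 1.9, 2.6, 2.2])

theta = np.linspace(0.0, 5.0, 2001)  # grid over the parameter space
dtheta = theta[1] - theta[0]

# pi(theta): a normal prior, standing in for HALT/HASS information.
prior = stats.norm.pdf(theta, loc=2.0, scale=1.0)
# p(x | theta): likelihood of the sample at each grid point.
like = np.prod(stats.norm.pdf(x[:, None], loc=theta, scale=0.5), axis=0)

# Posterior: prior * likelihood, normalized by the integral
# of the numerator over the parameter space Theta.
unnorm = prior * like
posterior = unnorm / (unnorm.sum() * dtheta)
print("posterior mean:", (theta * posterior).sum() * dtheta)
```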
Trip chaining of bicycle and car commuters: an empirical analysis of detours to secondary activities
Published in Transportmetrica A: Transport Science, 2022
Florian Schneider, Winnie Daamen, Serge Hoogendoorn
Table 3 presents the estimated coefficients of the posterior distributions of all main and interaction effects. The posterior distribution represents the uncertainty about the effect of a particular variable. The lower and upper bounds given indicate the 95% credible interval for each coefficient: given the prior and the data, there is a 95% probability that the population effect of a particular explanatory variable on the outcome variable lies within this credible interval (Depaoli and van de Schoot 2014). Since we used an uninformative prior, the posterior distribution depends only on the data. As a result, the mean of each posterior distribution approximates the regression coefficient estimated with the OLS regression model (see section 4.3). Moreover, insignificant results from the OLS model coincide with wide credible intervals in the Bayesian model. The estimated main effects in Table 3 refer to the sample mean, which is expressed via the constant. In contrast, the interaction terms pertain to the corresponding main effect (Nieuwenhuis et al. 2017; te Grotenhuis et al. 2017a). Note that where small deviations occurred across the four estimated models, the presented features of the posterior distributions are the averages over all four models (e.g. the mean of the posterior distribution of visit varied between 0.84 and 0.86).
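For illustration, a 95% credible interval is simply a pair of posterior quantiles; the sketch below (hypothetical draws, standing in for the fitted model's output) computes the posterior mean and the interval and checks whether the interval excludes zero:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior draws for one regression coefficient
# (in the paper these come from the fitted Bayesian model).
draws = rng.normal(0.85, 0.10, size=4000)

mean = draws.mean()
lower, upper = np.quantile(draws, [0.025, 0.975])  # 95% credible interval
print(f"mean = {mean:.2f}, 95% CrI = [{lower:.2f}, {upper:.2f}]")

# With an uninformative prior the posterior mean should approximate
# the OLS estimate; a wide interval that contains zero mirrors an
# insignificant OLS coefficient.
print("interval excludes zero:", lower > 0 or upper < 0)
```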
Knowledge transfer using Bayesian learning for predicting the process-property relationship of Inconel alloys obtained by laser powder bed fusion
Published in Virtual and Physical Prototyping, 2022
Cuiyuan Lu, Xiaodong Jia, Jay Lee, Jing Shi
Given the prior distributions of the model parameters and the conditional log-likelihood function in Equation (5), the posterior distribution of the model parameters follows from Bayes' theorem as Equation (6). The conditional likelihood is a multivariate Gaussian distribution. With the chosen prior distributions of the model parameters, the posterior distribution in Equation (6) does not correspond to any known distribution. Therefore, the MCMC method is used to draw samples from the posterior distribution; the outcome of this sampling process is a set of independent draws from the posterior distribution.
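A random-walk Metropolis sampler is one standard way to draw from such a non-standard posterior; the sketch below (hypothetical data and prior, not the paper's model) targets the unnormalized log posterior of a Gaussian likelihood and applies burn-in and thinning:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical observations under a Gaussian likelihood; the Gamma
# prior below makes the posterior non-conjugate (no closed form).
y = rng.normal(3.0, 1.0, size=30)

def log_post(theta):
    # log prior (Gamma) + Gaussian log-likelihood, up to a constant.
    if theta <= 0:
        return -np.inf
    return (stats.gamma.logpdf(theta, a=2.0, scale=1.0)
            + stats.norm.logpdf(y, loc=theta, scale=1.0).sum())

# Random-walk Metropolis: propose a move, accept with probability
# min(1, posterior ratio); burn-in and thinning reduce dependence
# between the retained draws.
theta, samples = 1.0, []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.3)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
samples = np.array(samples[5000::10])  # discard burn-in, then thin
print("posterior mean:", samples.mean())
```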
Understanding regional streamflow trend magnitudes in the Southern Murray-Darling Basin, Australia
Published in Australasian Journal of Water Resources, 2022
Zitian Gao, Danlu Guo, Murray C. Peel, Michael J. Stewardson
The calibration process in each BHM compares the model-simulated values with the corresponding observations. To initiate a BHM, the prior distributions of the variables and parameters need to be specified (see equations (9) and (11) and the equation annotations). The prior distribution represents the expected range of each parameter before it is inferred from any observed data, and the posterior distribution is the final parameter estimate based on the observations (calibration). The calibration involves drawing independent samples from the prior distribution using a Markov chain Monte Carlo (MCMC) technique and then using maximum likelihood to encourage convergence towards the posterior distribution. We extracted the mean of the posterior distribution of the region-level trend parameter as the region-level trend. The posterior distributions of the predictor coefficients were also assessed for the influence of each predictor (i.e. catchment characteristics) on the regional trends.
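Purely as an illustration of the summaries described here (hypothetical draws and parameter names, not the study's output), the sketch below extracts posterior means and 95% credible intervals from MCMC output and flags predictors whose intervals exclude zero:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical MCMC output from a Bayesian hierarchical model (BHM):
# the region-level trend plus two illustrative predictor coefficients.
draws = {
    "region_trend": rng.normal(-0.5, 0.10, size=4000),
    "beta_area":    rng.normal(0.2, 0.15, size=4000),
    "beta_rain":    rng.normal(0.6, 0.12, size=4000),
}

# Posterior mean as the point estimate (e.g. the region-level trend),
# and 95% credible intervals to judge each predictor's influence.
for name, d in draws.items():
    lo, hi = np.quantile(d, [0.025, 0.975])
    influential = lo > 0 or hi < 0  # interval excludes zero
    print(f"{name}: mean={d.mean():.2f}, 95% CrI=[{lo:.2f}, {hi:.2f}], "
          f"influential={influential}")
```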