Bayesian damage detection on full-scale pole structure with anchor bolt tension loosening
Published in Joan-Ramon Casas, Dan M. Frangopol, Jose Turmo, Bridge Safety, Maintenance, Management, Life-Cycle, Resilience and Sustainability, 2022
where κ = 0, 1. These are the likelihoods of observing $O_t$ under the respective hypotheses $H_\kappa$. In the Bayesian hypothesis test, the Bayes factor defined in Equation 7, which is the ratio of the two likelihoods, is used as an indicator:

$$B = \frac{p(O_t \mid H_0)}{p(O_t \mid H_1)}$$
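As a minimal numeric sketch of this ratio in Python, assume hypothetical Gaussian likelihood models for the monitored feature; the distributions and the observed value are illustrative, not from the chapter:

```python
from scipy.stats import norm

# Hypothetical likelihood models for the monitored feature O_t:
# H0 (intact): mean 1.0, sd 0.1; H1 (bolt loosened): mean 0.8, sd 0.1.
# All numbers are illustrative, not from the study.
o_t = 0.85  # an observed feature value

likelihood_h0 = norm.pdf(o_t, loc=1.0, scale=0.1)
likelihood_h1 = norm.pdf(o_t, loc=0.8, scale=0.1)

B = likelihood_h0 / likelihood_h1  # the Bayes factor of Equation 7
print(f"B = {B:.3f}")  # B < 1 favours H1, B > 1 favours H0
```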
Handling missing data in large databases
Published in Uwe Engel, Anabel Quan-Haase, Sunny Xun Liu, Lars Lyberg, Handbook of Computational Social Science, Volume 2, 2021
Martin Spiess, Thomas Augustin
The likelihood function is also at the heart of parametric Bayesian inference, where prior knowledge (or ignorance) about model parameters in the form of a prior distribution is combined with observed data information via the likelihood function to form the so-called posterior distribution of the parameter. This posterior distribution reflects the knowledge about the parameters of scientific interest in the light of new data and is used to draw inferences. Bayesian inferences, like direct-likelihood inferences, are generally not evaluated from a frequentist perspective but are based on their plausibility or their support from the observed data (Rubin, 1976). To evaluate models, the posterior distribution of the parameter may be inspected and Bayes factors comparing different models can be calculated. There is, however, also a demand to evaluate Bayesian inferences from a frequentist point of view (e.g. Rubin, 1996). In the case of direct-likelihood inferences, models are compared via likelihood ratios, that is, ratios of likelihood functions based on different models, each evaluated at its maximum.
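As a concrete illustration of the prior-times-likelihood update and of a Bayes factor computed from marginal likelihoods, here is a minimal grid-approximation sketch in Python; the beta-binomial setup and all numbers are our own illustrative assumptions, not from the chapter:

```python
import numpy as np
from scipy.stats import beta, binom

# Illustrative beta-binomial example (all numbers hypothetical):
# infer a success probability p after observing k successes in n trials.
k, n = 7, 10
grid = np.linspace(0.001, 0.999, 999)
dp = grid[1] - grid[0]

prior = beta.pdf(grid, 2, 2)        # prior knowledge about p
likelihood = binom.pmf(k, n, grid)  # observed-data information
posterior = prior * likelihood
posterior /= posterior.sum() * dp   # normalise into a density

# Bayes factor comparing two models via their marginal likelihoods:
# M0 fixes p = 0.5; M1 averages the likelihood over the Beta(2, 2) prior.
marg_m0 = binom.pmf(k, n, 0.5)
marg_m1 = (likelihood * prior).sum() * dp
print(f"BF01 = {marg_m0 / marg_m1:.3f}")
```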
Bayesian Learning Approach
Published in Mark Chang, Artificial Intelligence for Drug Development, Precision Medicine, and Healthcare, 2020
We know that the Bayes factor is the ratio of the posterior odds of the hypotheses to the corresponding prior odds. Thus, if the prior probabilities are $P(M_0) = P(\Theta_0)$ and $P(M_1) = P(\Theta_1) = 1 - P(\Theta_0)$, then

$$P(M_0 \mid x) = \left\{ 1 + \frac{1 - P(\Theta_0)}{P(\Theta_0)} \cdot \frac{1}{BF_{01}(x)} \right\}^{-1}.$$
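This identity transcribes directly into code; a minimal Python sketch (the function name and the example values are ours):

```python
def posterior_prob_m0(bf01: float, prior_m0: float = 0.5) -> float:
    """Posterior probability of M0 given BF01 and the prior P(Theta_0)."""
    prior_odds_m1 = (1.0 - prior_m0) / prior_m0
    return 1.0 / (1.0 + prior_odds_m1 / bf01)

# With equal prior odds, BF01 = 3 gives P(M0 | x) = 3/4.
print(posterior_prob_m0(bf01=3.0, prior_m0=0.5))  # 0.75
```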
Revisiting human-machine trust: a replication study of Muir and Moray (1996) using a simulated pasteurizer plant task
Published in Ergonomics, 2021
Jieun Lee, Yusuke Yamani, Shelby K. Long, James Unverricht, Makoto Itoh
The current study employed default Bayesian tests (Rouder & Morey, 2012; Rouder et al., 2009; Wetzels et al., 2011) instead of null-hypothesis significance tests (NHSTs). Unlike p-values in the NHSTs, Bayes factors, the measures of evidence in the Bayesian tests, are likelihood ratios of a statistical model including an effect of interest against one excluding it, and thus quantify evidence for the presence or absence of the effect. That is, the Bayesian analyses can provide statistical evidence for the null hypothesis, which the NHSTs, by definition, cannot. Bayes factors also offer more interpretable and meaningful measures of evidence than p-values. For example, a Bayes factor of 10 favouring an alternative hypothesis means that the data are ten times more likely to have arisen from a statistical model including the effect of interest than from one excluding it. We use labels to interpret Bayes factors as suggested by Jeffreys (1961) and Wetzels et al. (2011). The current study therefore used Bayesian forms of analyses of variance (ANOVAs), t-tests, and regressions where appropriate.
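For concreteness, a minimal Python sketch of the default one-sample Jeffreys-Zellner-Siow (JZS) Bayes factor of Rouder et al. (2009), computed by numerical integration; the data vector and the Cauchy scale r = √2/2 are illustrative choices, not values from the study:

```python
import numpy as np
from scipy import integrate, stats

def jzs_bf10(t: float, n: int, r: float = np.sqrt(2) / 2) -> float:
    """One-sample JZS Bayes factor BF10 (Rouder et al., 2009, Eq. 1)."""
    nu = n - 1

    def integrand(g: float) -> float:
        # Likelihood under H1, averaged over the Cauchy(0, r) prior on
        # effect size via g ~ InverseGamma(1/2, r**2 / 2).
        return ((1 + n * g) ** -0.5
                * (1 + t**2 / ((1 + n * g) * nu)) ** (-(nu + 1) / 2)
                * r / np.sqrt(2 * np.pi) * g ** -1.5
                * np.exp(-r**2 / (2 * g)))

    numerator, _ = integrate.quad(integrand, 0, np.inf)
    denominator = (1 + t**2 / nu) ** (-(nu + 1) / 2)  # likelihood under H0
    return numerator / denominator

# Hypothetical paired differences:
x = np.array([0.4, 1.2, 0.3, 0.9, 1.1, 0.2, 0.8, 0.7])
t_stat = stats.ttest_1samp(x, 0.0).statistic
print(f"BF10 = {jzs_bf10(t_stat, len(x)):.2f}")
```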
Discerning excellence from mediocrity in swimming: New insights using Bayesian quantile regression
Published in European Journal of Sport Science, 2021
Tony D. Myers, Yassine Negra, Senda Sammoud, Helmi Chaabene, Alan M. Nevill
The data analysis was conducted in two stages. The first stage involved identifying key predictors across all swimming strokes combined, and the second involved identifying key predictors of each stroke separately. For the first stage of analysis, a saturated Bayesian allometric regression model was fitted with all predictors included. The measurements used as predictors in the model included body-mass, height, percentage body-fat and limb dimensions (lengths and girths). To determine the best predictors of swimming speed, this initial model was fitted using a Jeffreys prior on sigma and a Zellner-Siow Cauchy prior on the model coefficients, the aim being to select the combination of predictors with the highest Bayes Factor (BF). Bayes Factors can be used to identify the models with the most evidence in their favour among the models considered (Kass & Raftery, 1995). Marginal posterior inclusion probabilities (MPIP) were calculated to determine how likely a particular predictor was to be in the ‘true model’. Bayes Factors and MPIP for models and variables were calculated using the Bayesian adaptive sampling algorithm described by Clyde, Ghosh, and Littman (Clyde, Ghosh, & Littman, 2011; Clyde, 2018) and implemented in the Bayesian Adaptive Sampling (BAS) package (Clyde, 2018) in R (R Core Team, 2019). Predictors with posterior inclusion probabilities greater than 0.5 were included in the model, and any predictor with a lower probability was discounted.
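The gist of this variable-selection step can be sketched in Python. The sketch below approximates each model's marginal likelihood with BIC rather than under the Zellner-Siow prior, enumerates subsets exhaustively rather than by adaptive sampling, and uses simulated stand-in data, so it mirrors the structure of the BAS analysis rather than reproducing its computation:

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in data: three candidate predictors, two truly active.
n, names = 200, ["height", "arm_girth", "body_fat"]
X = rng.normal(size=(n, 3))
y = 1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=n)

def bic(cols):
    """BIC of an OLS model with an intercept and the given predictors."""
    design = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    resid = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    k = design.shape[1] + 1  # coefficients plus error variance
    return n * np.log(resid @ resid / n) + k * np.log(n)

# Enumerate all 2**3 subsets; exp(-BIC / 2) approximates each model's
# marginal likelihood, so normalising gives posterior model probabilities
# under a uniform model prior.
models = [m for r in range(4) for m in combinations(range(3), r)]
log_w = np.array([-0.5 * bic(m) for m in models])
weights = np.exp(log_w - log_w.max())
weights /= weights.sum()

# A predictor's MPIP is the total probability of the models containing it.
for j, name in enumerate(names):
    mpip = sum(w for m, w in zip(models, weights) if j in m)
    print(f"MPIP({name}) = {mpip:.3f} -> {'keep' if mpip > 0.5 else 'drop'}")
```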
Computerized Device Equivalence: A Comparison of Surveys Completed Using A Smartphone, Tablet, Desktop Computer, and Paper-and-Pencil
Published in International Journal of Human–Computer Interaction, 2021
Arne Weigold, Ingrid K. Weigold, Stephanie A. Dykema, Naomi M. Drakeford, Caitlin A. Martin-Wagar
Next, we ran a series of Bayesian ANOVAs, which updated the prior distribution with the knowledge gained from the current study’s data. The degree to which the current data changed our understanding of the phenomenon under question from what was previously known is represented by the Bayes factor. The Bayes factor is a single number indicating the relative probability of favoring either H0 (no effect is present) or H1 (an effect is present); it is also possible that neither hypothesis is favored (Wagenmakers et al., 2018a). As we expected the Bayes factor to favor H0 for most of our analyses, we reported the factor as BF01, for which higher values indicate a greater probability of H0 being the better-supported hypothesis (Van Doorn et al., 2019). We used standard methods for interpreting the strength of the Bayes factor such that a BF01 of 1.00 indicates no evidence for either hypothesis, 1.00 to 3.00 indicates anecdotal evidence for H0, 3.00 to 10.00 suggests moderate evidence for H0, 10.00 to 30.00 suggests strong evidence for H0, and above 30.00 shows very strong evidence for H0 (Lee & Wagenmakers, 2013; Wagenmakers et al., 2018b).
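These interpretation bands translate directly into a small helper function; a minimal sketch (the function name and structure are ours):

```python
def interpret_bf01(bf01: float) -> str:
    """Label BF01 using the bands of Lee and Wagenmakers (2013)."""
    if bf01 < 1.0:
        return "evidence favours H1 (report BF10 = 1 / BF01 instead)"
    if bf01 == 1.0:
        return "no evidence for either hypothesis"
    if bf01 <= 3.0:
        return "anecdotal evidence for H0"
    if bf01 <= 10.0:
        return "moderate evidence for H0"
    if bf01 <= 30.0:
        return "strong evidence for H0"
    return "very strong evidence for H0"

print(interpret_bf01(12.4))  # strong evidence for H0
```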