Model Assessment
Published in Gary L. Rosner, Purushottam W. Laud, Wesley O. Johnson, Bayesian Thinking in Biostatistics, 2021
Let us suppose the expert's best guess for θ under the alternative is , but that they are incredibly unsure about this guess, so they select . Imagine that is observed. The classical two-sided p-value for testing H0 is , so virtually any non-Bayesian would reject the null. However, and consequently as the prior probability of H0 is taken as 0.5. This illustrates the so-called Lindley paradox [224], where a Bayesian, albeit with what we would term a silly prior for any realistic situation, would emphatically accept the null hypothesis while the classical analysis would suggest its strong rejection. The main point here is that the result depends heavily on the prior specification. If we were to plot the prior for μ, it would be difficult to distinguish it from a constant but improper prior.
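The specific numbers from the book are elided above, but the mechanism can be sketched with made-up values: the sample size, observed mean, and diffuse prior scale τ below are illustrative assumptions, not the authors' figures. With a point null H0: θ = 0 and a very flat N(0, τ²) prior under H1, a result that is "significant" by the two-sided p-value can still leave most of the posterior probability on H0.

```python
import math

def normal_pdf(x, mean, sd):
    """Density of N(mean, sd^2) at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def lindley_demo(xbar, n, sigma=1.0, tau=10.0, prior_h0=0.5):
    """Contrast the classical two-sided p-value with the posterior probability
    of a point null H0: theta = 0, using a diffuse N(0, tau^2) prior under H1.
    All numerical settings here are illustrative, not from the source text."""
    se = sigma / math.sqrt(n)
    z = xbar / se
    p_value = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided normal p-value
    # Marginal density of xbar under each hypothesis:
    m0 = normal_pdf(xbar, 0.0, se)                          # H0: theta = 0 exactly
    m1 = normal_pdf(xbar, 0.0, math.sqrt(tau**2 + se**2))   # H1: theta ~ N(0, tau^2)
    bf01 = m0 / m1                                          # Bayes factor for H0
    post_h0 = bf01 * prior_h0 / (bf01 * prior_h0 + (1.0 - prior_h0))
    return p_value, post_h0

# z = 2 is "significant" at the 5% level, yet P(H0 | data) is near 1.
p, post = lindley_demo(xbar=0.02, n=10_000)
print(f"two-sided p-value = {p:.4f}, P(H0 | data) = {post:.3f}")
```

The diffuse prior spreads H1's predictive mass over a huge range, so even a mildly surprising observation is far more consistent with the sharp null than with the vague alternative, which is exactly the tension the paradox names.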
Bayes Factor–Based Test Statistics
Published in Albert Vexler, Alan D. Hutson, Xiwei Chen, Statistical Testing Strategies in the Health Sciences, 2017
Based on the posterior mean of the likelihood under each model, rather than the usual prior mean, Aitkin (1991) proposed a general procedure for computing Bayes factors for the comparison of arbitrary models. The author argued that this choice has several advantages, including reduced sensitivity to variations in the prior and avoidance of the Lindley paradox in testing point null hypotheses.
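The claimed insensitivity to the prior can be sketched numerically. The toy setup below (a normal mean with known variance, conjugate N(0, τ²) prior, and made-up data) is an illustrative assumption, not Aitkin's worked example: the posterior mean of the likelihood barely moves when the prior is made 100 times more diffuse, whereas the usual prior-mean (marginal) likelihood shrinks roughly in proportion to the prior scale.

```python
import math
import random

def log_lik(data, theta, sigma=1.0):
    """Log-likelihood of i.i.d. N(theta, sigma^2) data."""
    n = len(data)
    return (-0.5 * sum(((x - theta) / sigma) ** 2 for x in data)
            - n * math.log(sigma * math.sqrt(2.0 * math.pi)))

def posterior_mean_likelihood(data, prior_sd, sigma=1.0, draws=50_000, seed=7):
    """Monte Carlo estimate of E[L(theta) | data], the posterior mean of the
    likelihood, under a conjugate N(0, prior_sd^2) prior for a normal mean."""
    n = len(data)
    xbar = sum(data) / n
    post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
    post_mean = post_var * (n * xbar / sigma**2)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(draws):
        theta = rng.gauss(post_mean, math.sqrt(post_var))
        total += math.exp(log_lik(data, theta, sigma))
    return total / draws

data = [0.3, -0.1, 0.5, 0.2, 0.1]  # small illustrative sample
n, xbar = len(data), sum(data) / len(data)

# Posterior mean of the likelihood: nearly unchanged by a 100x wider prior.
pml_10 = posterior_mean_likelihood(data, prior_sd=10.0)
pml_1000 = posterior_mean_likelihood(data, prior_sd=1000.0)

def marg(tau, sigma=1.0):
    """Prior-mean (marginal) density of xbar: N(0, tau^2 + sigma^2/n)."""
    sd = math.sqrt(tau**2 + sigma**2 / n)
    return math.exp(-0.5 * (xbar / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

print(pml_1000 / pml_10)          # close to 1
print(marg(1000.0) / marg(10.0))  # close to 0.01
```

Because the prior-mean ratio feeds directly into the ordinary Bayes factor, a point-null comparison inherits that sensitivity to τ, which is the route by which the Lindley paradox arises; the posterior-mean version sidesteps it.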
An automatic robust Bayesian approach to principal component regression
Published in Journal of Applied Statistics, 2021
Philippe Gagnon, Mylène Bédard, Alain Desgagné
It is often difficult, in PCR, to specify meaningful priors on the models and their parameters. For this reason, noninformative priors are commonly favoured. The simplest noninformative structure is arguably the improper Jeffreys prior on the parameters of each model, along with a uniform prior on the models. With such a prior structure, one might wonder whether the so-called Jeffreys–Lindley paradox ([24] and [20]), which manifests as inconsistent model-selection results, may arise. We show that it does not, and we adopt this structure.