An Overview of Bayesian Inference
Published in Song S. Qian, Mark R. DuFour, Ibrahim Alameddine, Bayesian Applications in Environmental and Ecological Studies with R and Stan, 2023
Song S. Qian, Mark R. DuFour, Ibrahim Alameddine
A distinct feature of Bayesian inference is its capability of incorporating existing knowledge in the form of a prior distribution. However, specifying a prior distribution is a highly technical problem, with difficulties similar to those of model specification. Specifically, deriving a prior distribution requires knowledge of both statistics (which probability distributions are relevant) and the subject matter (how to quantify often qualitative knowledge). In a study of fish response to lake acidification in the Adirondack lakes of New York State, Reckhow [1987, 1988] used a logistic model to predict the probability of a lake supporting brook trout (Salvelinus fontinalis) from the lake's average pH level. The prior distributions of the regression model parameters were constructed from a fisheries expert's answers to questions like the following: Given 100 lakes in the Adirondacks that supported brook trout populations in the past, if all 100 lakes now have pH = 5.6 and a calcium concentration of 130 μeq/L, how many do you expect to continue to support brook trout populations?
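An elicited answer of this kind can be checked against a candidate prior through prior-predictive simulation. Below is a minimal sketch in base R, assuming a logistic model in pH only (Reckhow's model also used calcium); the prior means and standard deviations are hypothetical placeholders, not the elicited values.

```r
## Illustrative prior-predictive check for a logistic model of brook trout
## presence versus lake pH; the hyperparameters below are hypothetical.
set.seed(1)
n_sim <- 10000
beta0 <- rnorm(n_sim, mean = -12, sd = 3)    # hypothetical prior on the intercept
beta1 <- rnorm(n_sim, mean = 2, sd = 0.5)    # hypothetical prior on the pH slope

## Expert question: of 100 lakes at pH = 5.6 that supported brook trout,
## how many are expected to continue supporting them?
p_56 <- plogis(beta0 + beta1 * 5.6)          # prior-predictive probability at pH 5.6
summary(100 * p_56)                          # compare with the expert's stated number
```

If the implied count disagrees with the expert's answer, the hyperparameters are adjusted until the prior predictive distribution is consistent with the elicited judgment.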
Network Meta-Analysis
Published in Ding-Geng (Din) Chen, Karl E. Peace, Applied Meta-Analysis with R and Stata, 2021
Bayesian inference involves the process of fitting a probability model to a set of observed data and summarizing the results for the unobserved parameters or unobserved data given the observed data (Gelman 2014). The essential characteristic of Bayesian methods is the use of probability to quantify uncertainty in inferences based on statistical data analysis. The process of Bayesian data analysis can be divided into three steps: (1) setting up a joint probability model for all observable data and unobservable parameters in a problem; (2) calculating and interpreting the appropriate posterior distribution, that is, the conditional probability distribution of the unobserved parameters of interest given the observed data; (3) evaluating the fit of the model and the implications of the resulting posterior distribution (Gelman 2014).
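A minimal sketch of these three steps in base R, assuming a conjugate beta-binomial model with hypothetical data (12 successes out of 30 trials):

```r
## Hypothetical data
y <- 12; n <- 30

## Step 1: joint model  p(y, theta) = Binomial(y | n, theta) * Beta(theta | a, b)
a <- 1; b <- 1                                # flat Beta(1, 1) prior

## Step 2: posterior  theta | y ~ Beta(a + y, b + n - y)  (conjugacy)
post_a <- a + y; post_b <- b + n - y
qbeta(c(0.025, 0.5, 0.975), post_a, post_b)   # posterior median and 95% interval

## Step 3: model evaluation via the posterior predictive distribution
theta_draws <- rbeta(5000, post_a, post_b)
y_rep <- rbinom(5000, size = n, prob = theta_draws)
mean(y_rep >= y)                              # posterior predictive tail probability
```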
Bayesian Methods for Meta-Analysis
Published in Christopher H. Schmid, Theo Stijnen, Ian R. White, Handbook of Meta-Analysis, 2020
Christopher H. Schmid, Bradley P. Carlin, Nicky J. Welton
Scientific knowledge commonly increases by incorporating new evidence to update our beliefs based on a model of previous evidence. The Bayesian approach reflects this learning process. In the Bayesian framework, the parameters Θ of a model are treated as random variables about which we have uncertainty. Bayesian inference seeks to describe our knowledge about the parameters given the information available to us: from the observed data y, through the process that generated the data, the likelihood p(y|Θ), and from our prior beliefs about the parameters, described by a prior distribution p(Θ). For example, if we have data from a randomized controlled trial (RCT) comparing a new antibacterial product with standard care and reporting the number of patients developing an infection, we might assume that the data are generated through a binomial model with parameters Θ representing the probability of infection in each treatment group. The parameter of interest is usually the difference in these probabilities or some function of them, such as the difference of their logits, the log odds ratio. Once generated, the observed data are considered known and fixed (not random). Our knowledge about the parameters conditional on the observed data is expressed by a probability distribution that quantifies our beliefs about the values of these parameters after (or posterior to) the data.
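A short sketch of this RCT example in base R, assuming independent Beta(1, 1) priors on the two infection probabilities and Monte Carlo draws for the log odds ratio; all counts are hypothetical.

```r
## Hypothetical two-arm trial counts
set.seed(1)
events_new <- 8;  n_new <- 120    # infections under the new antibacterial product
events_std <- 20; n_std <- 115    # infections under standard care

## Binomial likelihood with Beta(1, 1) priors gives Beta posteriors for each probability
n_draw <- 20000
p_new <- rbeta(n_draw, 1 + events_new, 1 + n_new - events_new)
p_std <- rbeta(n_draw, 1 + events_std, 1 + n_std - events_std)

## Derived parameter of interest: log odds ratio (difference of logits)
log_or <- qlogis(p_new) - qlogis(p_std)
quantile(log_or, c(0.025, 0.5, 0.975))   # posterior median and 95% credible interval
```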
A maximum likelihood estimator for left-truncated lifetimes based on probabilistic prior information about time of occurrence
Published in Journal of Applied Statistics, 2018
Rubén Manso, Rafael Calama, Marta Pardos, Mathieu Fortin
As mentioned before, individual i is not observed until it enters the experiment. Let us denote this birth event as E. In forestry, there are many examples of models that predict this event E as a function of time. Because no date is more likely than another (i.e. all dates are equally likely to ‘occur’ regardless of any E event), the prior is uniform (equation (12)). Note that our approach is fully frequentist; that is, no Bayesian inference is carried out. We only use Bayes' rule as a means to assess the terms in equations (10) and (11).
Bayesian inference for quantum state tomography
Published in Journal of Applied Statistics, 2018
D. S. Gonçalves, C. L. N. Azevedo, C. Lavor, M. A. Gomes-Ruggiero
In conclusion, both Bayesian and bootstrap (resampling) methods circumvent the problems related to the problematic data. Nevertheless, notice that the bootstrap estimates (both point and interval estimates) are ‘contaminated’, since in some of the bootstrap samples the MLE corresponds to unacceptable values. In other words, the bootstrap distributions of the estimators present unacceptable values, whereas the respective Bayesian posterior distributions do not. Also, Bayesian inference allows one to incorporate prior information and easily accommodates other likelihoods, providing a general framework for inference (estimation, model selection, and hypothesis testing) through the posterior distribution. In addition, credible intervals for derived quantities of interest, for example the purity of a density matrix, can be obtained directly from the posterior distribution.
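As an illustration of the last point, a credible interval for a derived quantity such as the single-qubit purity Tr(ρ²) = (1 + |r|²)/2 can be computed directly from posterior draws of the Bloch vector r. The sketch below uses hypothetical posterior draws in base R; it is not the authors' tomography posterior.

```r
## Hypothetical posterior draws of the Bloch vector (r_x, r_y, r_z)
set.seed(1)
n_draw <- 5000
r <- cbind(rnorm(n_draw, 0.6, 0.05),
           rnorm(n_draw, 0.2, 0.05),
           rnorm(n_draw, 0.5, 0.05))
r <- r[sqrt(rowSums(r^2)) <= 1, , drop = FALSE]   # keep only physically valid states

## Purity of a qubit state: Tr(rho^2) = (1 + |r|^2) / 2
purity <- (1 + rowSums(r^2)) / 2
quantile(purity, c(0.025, 0.975))                 # 95% credible interval for the purity
```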
Compound Poisson frailty model with a gamma process prior for the baseline hazard: accounting for a cured fraction
Published in Journal of Applied Statistics, 2022
Maryam Rahmati, Parisa Rezanejad Asl, Javad Mikaeli, Hojjat Zeraati, Aliakbar Rasekhi
There are several model selection criteria available for Bayesian inference. We select the deviance information criterion (DIC3) [10] and the log pseudo-marginal likelihood (LPML) [35]. DIC3 is defined as in [36]; a smaller value of DIC3 indicates a better-fitting model. The LPML is the sum of the logarithms of the conditional predictive ordinates (CPO), which are defined as in [35]; a larger value of LPML indicates a better-fitting model.
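A sketch of the usual Monte Carlo estimator of CPO and LPML from MCMC output, assuming a hypothetical S × n matrix `lik` of pointwise likelihoods f(y_i | θ⁽ˢ⁾), with one row per posterior draw and one column per observation:

```r
## Estimate CPO_i by the harmonic mean of the pointwise likelihoods over the draws,
## then sum the log CPOs to obtain the LPML (larger is better).
lpml_from_lik <- function(lik) {
  cpo <- 1 / colMeans(1 / lik)   # CPO_i estimated by the harmonic mean of f(y_i | theta^(s))
  sum(log(cpo))                  # LPML = sum_i log(CPO_i)
}
```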