Bayesian information fusion for non-competing relationship degradation process
Published in Stein Haugen, Anne Barros, Coen van Gulijk, Trond Kongsvik, Jan Erik Vinnem, Safety and Reliability – Safe Societies in a Changing World, 2018
Junyu Guo, Hong-Zhong Huang, Yan-Feng Li, Jie Zhou, Xiang-Yu Li
From Equations (10) and (11), the MCMC method is used to estimate the model parameters because direct computation is intractable. In most practical applications of Bayesian methods, the posterior distribution is difficult to obtain in closed form. The MCMC method constructs a Markov chain whose invariant distribution is the posterior distribution that needs to be estimated. In this paper, we assume that the prior distributions of the model parameters are non-informative, and OpenBUGS is used to perform the Gibbs sampling that estimates the model parameters.
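As an illustration of this workflow only (not the paper's degradation model, whose Equations (10) and (11) are not reproduced here), the sketch below shows how Gibbs sampling with vague, non-informative priors might be run in OpenBUGS from R via the R2OpenBUGS package. The simple normal model, the simulated data, and the variable names are assumptions made for the example.

    ## Minimal sketch (not the paper's model): a simple normal model with
    ## vague priors, fitted with OpenBUGS from R using R2OpenBUGS.
    library(R2OpenBUGS)

    ## Hypothetical BUGS model with non-informative (vague) priors.
    model_string <- "
    model {
      for (i in 1:n) {
        y[i] ~ dnorm(mu, tau)     # likelihood (BUGS parameterizes by precision)
      }
      mu  ~ dnorm(0, 1.0E-6)      # vague prior on the mean
      tau ~ dgamma(0.001, 0.001)  # vague prior on the precision
      sigma <- 1 / sqrt(tau)
    }
    "
    model_file <- file.path(tempdir(), "model.txt")
    writeLines(model_string, model_file)

    ## Simulated example data; real measurements would replace these.
    y <- rnorm(30, mean = 5, sd = 2)
    data_list <- list(y = y, n = length(y))
    inits <- function() list(mu = 0, tau = 1)

    ## Gibbs sampling: OpenBUGS draws from the posterior of mu and sigma.
    fit <- bugs(data = data_list, inits = inits,
                parameters.to.save = c("mu", "sigma"),
                model.file = model_file,
                n.chains = 3, n.iter = 5000, n.burnin = 1000)
    print(fit)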
Bayesian simultaneous prediction intervals and bounds for a finite population
Published in Quality Engineering, 2020
Michael S. Hamada, Brian P. Weaver
Consider a finite population of m items that have a characteristic of interest, Y, such as a physical dimension. We assume that the finite population is a random sample from an infinite superpopulation whose cumulative distribution function (cdf) is F(y; θ), where θ is a vector of parameters, as is done in survey sampling (Lohr 2010). We want a prediction interval that contains k of the finite population y's, that is, k out of m y's, or an upper or lower bound that bounds k out of m y's. Suppose that we have data available, a sample of size n from the finite population; we use these data to obtain a posterior distribution for θ, denoted by p(θ | data). We use OpenBUGS (Spiegelhalter et al. 2014) to obtain draws from this posterior distribution using a Markov chain Monte Carlo (MCMC) algorithm. We then use the posterior distribution draws to calculate empirical simultaneous prediction intervals and bounds.
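As a rough illustration of the last step (a sketch of the general idea, not necessarily the authors' exact algorithm), the R code below computes an empirical simultaneous lower prediction bound for k of m finite-population values from posterior draws, assuming a normal superpopulation. The function name, the normality assumption, and the made-up posterior draws are all illustrative.

    ## Minimal sketch (hypothetical names): an empirical simultaneous lower
    ## prediction bound for k of m finite-population values, computed from
    ## posterior draws of (mu, sigma) under a normal superpopulation.
    lower_bound_k_of_m <- function(mu_draws, sigma_draws, m, k, alpha = 0.05) {
      stopifnot(length(mu_draws) == length(sigma_draws), k <= m)
      ## For each posterior draw, simulate the m finite-population values and
      ## record the k-th largest; any bound below that value covers >= k of m.
      t_stat <- vapply(seq_along(mu_draws), function(s) {
        y_pop <- rnorm(m, mu_draws[s], sigma_draws[s])
        sort(y_pop, decreasing = TRUE)[k]
      }, numeric(1))
      ## The alpha-quantile of the statistic is the empirical bound: with
      ## posterior predictive probability about (1 - alpha), at least k of the
      ## m finite-population values exceed it.
      unname(quantile(t_stat, probs = alpha))
    }

    ## Example with made-up posterior draws:
    set.seed(1)
    mu_draws    <- rnorm(4000, 5, 0.2)
    sigma_draws <- abs(rnorm(4000, 2, 0.1))
    lower_bound_k_of_m(mu_draws, sigma_draws, m = 50, k = 45)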
Experiencing the practice of statistics with simple experiments
Published in Quality Engineering, 2018
Alexandra J. Hamada, Christina A. Hamada, Masaru Hamada, Michael S. Hamada
We expect that new professionals are already facing such challenges as they work on their projects. If they are facing issues that may not have been covered in classes, and perhaps not even in the statistical literature, they need to address them and develop solutions that sometimes involve research (Hamada and Sitter 2004); we encourage new professionals to embrace these challenges as an exciting and creative aspect of the practice of statistics. As a statistician at Los Alamos National Laboratory since 1998, Michael often runs into such challenges; it seems that almost every project has some new twist that must be dealt with, and many of these have led to publications. Today, Michael uses essentially only R (R Core Team 2017) and OpenBUGS (the open-source version of WinBUGS (Lunn et al. 2000), for Bayesian inference) to address these challenges in his practice of statistics.
An incomplete taxonomy of Bayesian models with examples from industrial statistics applications
Published in Quality Engineering, 2020
The Bayesian model that one develops for a data analysis is likely to be complex enough that Markov chain Monte Carlo (MCMC) algorithms will be needed to obtain samples from the posterior distribution (Gelman et al. 2013). One of the modern Bayesian computing software packages, such as JAGS (Plummer 2017), Stan (Stan Development Team 2018), or OpenBUGS (Spiegelhalter et al. 2014), may be able to implement the Bayesian model. Otherwise, data analysts will have to code their own sampler, say a Metropolis-Hastings algorithm (Chib and Greenberg 1995), in R (R Core Team 2018) or some other programming software.
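For the last case, a minimal sketch of such a hand-coded sampler in base R is given below: a random-walk Metropolis-Hastings algorithm for a generic log-posterior. The function names and the simple normal-mean example are illustrative assumptions, not code from any of the cited references.

    ## Minimal sketch: a random-walk Metropolis-Hastings sampler in base R for
    ## a user-supplied log-posterior, along the lines of coding one's own
    ## sampler when the packaged MCMC software does not fit the model.
    metropolis_hastings <- function(log_post, init, n_iter = 10000, prop_sd = 0.5) {
      d <- length(init)
      draws <- matrix(NA_real_, nrow = n_iter, ncol = d)
      current <- init
      current_lp <- log_post(current)
      for (i in seq_len(n_iter)) {
        proposal <- current + rnorm(d, 0, prop_sd)   # symmetric random-walk proposal
        proposal_lp <- log_post(proposal)
        ## Accept with probability min(1, posterior ratio); work on the log scale.
        if (log(runif(1)) < proposal_lp - current_lp) {
          current <- proposal
          current_lp <- proposal_lp
        }
        draws[i, ] <- current
      }
      draws
    }

    ## Example: posterior for the mean of normal data with known sd = 1 and a
    ## flat prior, so the log-posterior is just the log-likelihood.
    y <- rnorm(25, mean = 3, sd = 1)
    log_post <- function(theta) sum(dnorm(y, mean = theta, sd = 1, log = TRUE))
    draws <- metropolis_hastings(log_post, init = 0)
    mean(draws[-(1:1000), 1])   # posterior mean after discarding burn-in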