Preliminary study on television media hot spot mining
Published in Amir Hussain, Mirjana Ivanovic, Electronics, Communications and Networks IV, 2015
Huiyi Li, Zhanbin He, Han Long
where the parameter α is a K-vector with components α_k > 0, and Γ(x) is the Gamma function. The optimal values of the variational parameters are obtained by minimizing the KL divergence between the variational distribution and the true posterior; for details of the calculation, refer to (Blei et al. 2003). Gibbs sampling is a form of Markov chain Monte Carlo (Gilks & Richardson 1996). The sampling is done sequentially, and it proceeds until the sampled values approximate the target distribution. For details, refer to (Griffiths & Steyvers 2004).
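As a concrete illustration of the sequential sampling described above, the sketch below implements collapsed Gibbs sampling for LDA topic assignments in the spirit of Griffiths & Steyvers (2004). It is a minimal sketch, not code from the chapter: the toy corpus, the number of topics K, and the hyperparameters alpha and beta are illustrative assumptions.

```python
# Minimal sketch of collapsed Gibbs sampling for LDA (toy data, assumed hyperparameters).
import numpy as np

rng = np.random.default_rng(0)

docs = [[0, 1, 2, 1], [2, 3, 3, 0]]   # toy corpus: word ids per document (assumption)
V, K = 4, 2                            # vocabulary size, number of topics (assumption)
alpha, beta = 0.5, 0.1                 # symmetric Dirichlet hyperparameters (assumption)

# Random initial topic assignments and the count tables they imply.
z = [[rng.integers(K) for _ in d] for d in docs]
n_dk = np.zeros((len(docs), K))        # topic counts per document
n_kw = np.zeros((K, V))                # word counts per topic
n_k = np.zeros(K)                      # total counts per topic
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = z[d][i]
        n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1

for sweep in range(200):               # sequential Gibbs sweeps over every token
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]                # remove the current assignment from the counts
            n_dk[d, k] -= 1; n_kw[k, w] -= 1; n_k[k] -= 1
            # full conditional p(z_i = k | remaining assignments, words)
            p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
            k = rng.choice(K, p=p / p.sum())
            z[d][i] = k                # add the new assignment back
            n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1

print(n_kw)  # counts from late sweeps approximate draws from the target posterior
```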
Monte Carlo Methods for Statistical Signal Processing
Published in Erchin Serpedin, Thomas Chen, Dinesh Rajan, Mathematical Foundations for Signal Processing, Communications, and Networking, 2012
The Gibbs sampler’s popularity in the statistics community stems from its extensive use of conditional distributions in each iteration. The data augmentation method [12] first linked the Gibbs sampling structure with missing data problems and EM-type algorithms. The Gibbs sampler was further popularized by [13], who pointed out that the conditionals needed in Gibbs iterations are commonly available in many Bayesian and likelihood computations. Under regularity conditions, one can show that the Gibbs sampler chain converges geometrically and that its convergence rate depends on how strongly the variables are correlated with each other [14]. Therefore, grouping highly correlated variables together in the Gibbs update can greatly speed up the sampler.
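The effect of correlation on mixing can be seen in a minimal sketch (not from the chapter) of a Gibbs sampler for a bivariate normal distribution with correlation rho: each coordinate is drawn from its conditional given the other, and larger |rho| makes successive draws more dependent, which is exactly the slow-mixing behaviour attributed above to correlated variables. The value of rho and the number of iterations are assumptions for illustration.

```python
# Gibbs sampler for a standard bivariate normal with correlation rho (illustrative values).
import numpy as np

rng = np.random.default_rng(1)
rho = 0.95                      # assumed correlation; try 0.1 vs 0.95 to compare mixing
n_iter = 10_000
x = y = 0.0
samples = np.empty((n_iter, 2))

for t in range(n_iter):
    # conditional of x given y is N(rho * y, 1 - rho**2), and symmetrically for y
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    samples[t] = x, y

print(np.corrcoef(samples, rowvar=False))   # empirical correlation, close to rho
```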
Introduction
Published in Sugato Basu, Ian Davidson, Kiri L. Wagstaff, Constrained Clustering, 2008
Sugato Basu, Ian Davidson, Kiri L. Wagstaff
Equation (4.12), taken collectively over all i, constitutes the mean field equations. Evaluating the mean field equations requires at most O(NnM) time, which is the same as the time complexity of one Gibbs sampling pass. Successive updates of equation (4.12) converge to a local optimum of equation (4.11). In our experiments, convergence usually occurs after about 20 iterations, which is far fewer than the number of passes required for Gibbs sampling.
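Since equation (4.12) itself is not reproduced in this excerpt, the sketch below only illustrates the generic fixed-point pattern it describes: repeatedly apply the update to every component until the change falls below a tolerance. The classic mean-field equations m_i = tanh(sum_j J_ij m_j + h_i) for a small Ising-type model are used purely as a stand-in, and all values are assumptions.

```python
# Generic mean-field fixed-point iteration, with Ising-style updates as a stand-in model.
import numpy as np

rng = np.random.default_rng(2)
N = 10
J = rng.normal(scale=0.1, size=(N, N)); J = (J + J.T) / 2   # symmetric couplings (assumed)
np.fill_diagonal(J, 0.0)
h = rng.normal(scale=0.5, size=N)                            # external fields (assumed)

m = np.zeros(N)                          # mean-field parameters, one per variable
for it in range(100):
    m_new = np.tanh(J @ m + h)           # successive update of every component
    if np.max(np.abs(m_new - m)) < 1e-6: # stop once the updates stabilise
        m = m_new
        print(f"converged after {it + 1} iterations")
        break
    m = m_new
```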
Superposed Poisson process models with a modified bathtub intensity function for repairable systems
Published in IISE Transactions, 2021
Tao Yuan, Tianqiang Yan, Suk Joo Bae
In each iteration of Gibbs sampling, a value can be computed for a function of the model parameters, e.g., the intensity function or the mean function. After M iterations of the Gibbs sampling procedure, the values drawn for a model parameter, or for a function of the model parameters, can be regarded as a random sample from its marginal posterior distribution. Sample statistics can then be used to construct posterior point and interval estimates: the posterior means or medians can serve as point estimates, and Bayesian interval estimates can be constructed from sample percentiles. For example, a 95% Bayesian credible interval can be formed from the 2.5th and 97.5th sample percentiles.
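The summaries described above reduce to a few lines of post-processing once the retained draws are available. The sketch below is a minimal illustration: the draws are simulated stand-ins, not output from the paper's superposed Poisson process model.

```python
# Point and interval summaries from M retained Gibbs draws (simulated stand-in sample).
import numpy as np

rng = np.random.default_rng(3)
draws = rng.gamma(shape=2.0, scale=1.5, size=5000)   # pretend marginal posterior sample

post_mean = draws.mean()                    # posterior mean as a point estimate
post_median = np.median(draws)              # posterior median as an alternative
lo, hi = np.percentile(draws, [2.5, 97.5])  # 95% equal-tailed credible interval

print(f"mean={post_mean:.3f}, median={post_median:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```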
Postauditing and Cost Estimation Applications: An Illustration of MCMC Simulation for Bayesian Regression Analysis
Published in The Engineering Economist, 2019
The other popular algorithm used in MCMC simulation is the Gibbs sampling algorithm (Chun 2008; Geman and Geman 1984). It is a special case of the Metropolis algorithm and samples from the posterior conditional distributions of the parameters. The advantage of the Gibbs sampling algorithm is that it does not require an instrumental proposal distribution as the Metropolis algorithm does (SAS/STAT 9.2 User’s Guide). In the Gibbs sampling algorithm, each model parameter is updated one at a time according to its full conditional distribution. Consequently, for the Gibbs sampler to work, it must be possible to decompose the joint posterior distribution into a full conditional distribution for each parameter in the multidimensional model. Gibbs sampling is highly efficient, but it tends to be slow when the parameters are highly correlated in the target distribution (Gelman et al. 2014). This problem may be alleviated by reparameterization or by combining Gibbs sampling with other algorithms. Integrating it with the MH algorithm yields the hybrid Metropolis algorithm with Gibbs updates, which is widely used in MCMC software.
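A hedged sketch of such a hybrid sampler is shown below, not as the article's method but as a generic Metropolis-within-Gibbs example: in a normal model y_i ~ N(mu, sigma^2), mu has a conjugate normal full conditional and is updated by an exact Gibbs draw, while sigma (given a half-Cauchy prior, chosen here only for illustration) is updated by a random-walk Metropolis step inside the same sweep. The data, priors, and proposal scale are all assumptions.

```python
# Metropolis-within-Gibbs for y ~ N(mu, sigma^2): Gibbs draw for mu, Metropolis step for sigma.
import numpy as np

rng = np.random.default_rng(4)
y = rng.normal(3.0, 2.0, size=50)          # simulated data (assumption)
n, ybar = len(y), y.mean()
mu0, tau2 = 0.0, 100.0                     # N(mu0, tau2) prior on mu (assumption)

def log_post_sigma(sigma, mu):
    """Log full conditional of sigma up to a constant; -inf outside the support."""
    if sigma <= 0:
        return -np.inf
    loglik = -n * np.log(sigma) - np.sum((y - mu) ** 2) / (2 * sigma**2)
    logprior = -np.log(1 + sigma**2)       # half-Cauchy(0, 1) prior, up to a constant
    return loglik + logprior

mu, sigma = 0.0, 1.0
draws = []
for t in range(5000):
    # Gibbs step: exact draw from the normal full conditional of mu
    prec = n / sigma**2 + 1 / tau2
    mean = (n * ybar / sigma**2 + mu0 / tau2) / prec
    mu = rng.normal(mean, np.sqrt(1 / prec))

    # Metropolis step for sigma within the same sweep (random-walk proposal, assumed scale 0.3)
    prop = sigma + rng.normal(0, 0.3)
    if np.log(rng.uniform()) < log_post_sigma(prop, mu) - log_post_sigma(sigma, mu):
        sigma = prop
    draws.append((mu, sigma))

print(np.mean(draws, axis=0))              # posterior means of (mu, sigma)
```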
A model comparison algorithm for increased forecast accuracy of dengue fever incidence in Singapore and the auxiliary role of total precipitation information
Published in International Journal of Environmental Health Research, 2018
Yew-Meng Koh, Reagan Spindler, Matthew Sandgren, Jiyi Jiang
In any application of Bayesian methods, it is important to check that the MCMC chains used in Gibbs sampling have converged. Convergence was confirmed in three ways in this paper: by checking history plots of the samples to ensure proper mixing of the chains; by using only iterations drawn after mixing had been confirmed (we used draws from the 500,000th iteration onward in our analysis); and by observing autocorrelation plots of the samples to ensure low levels of autocorrelation between the samples. More details on convergence checking can be obtained from Cowles (2013).
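Two of these checks, discarding draws before an assumed burn-in point and inspecting the autocorrelation of the retained samples, are illustrated in the minimal sketch below. The chain is a synthetic AR(1) series standing in for one parameter's trace, not output from the dengue model, and the burn-in cutoff and lags are assumptions.

```python
# Burn-in removal and sample autocorrelation for a synthetic MCMC trace.
import numpy as np

rng = np.random.default_rng(5)
chain = np.empty(20_000)                  # synthetic AR(1) chain as a stand-in trace
chain[0] = 0.0
for t in range(1, len(chain)):
    chain[t] = 0.9 * chain[t - 1] + rng.normal()

burn_in = 5_000                           # assumed point after which mixing looks adequate
kept = chain[burn_in:]                    # retain only post-burn-in draws

def autocorr(x, lag):
    """Sample autocorrelation of x at the given positive lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

for lag in (1, 10, 50):                   # low values at large lags indicate good mixing
    print(f"lag {lag:>2}: autocorrelation = {autocorr(kept, lag):.3f}")
```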