Bayesian Inference in Graphical Gaussian Models
Published in Marloes Maathuis, Mathias Drton, Steffen Lauritzen, Martin Wainwright, Handbook of Graphical Models, 2018
From properties (i)-(iv) above, it follows immediately that, as for the hyper inverse Wishart, one can sample from the $IW_{P_G}$ distribution. Moreover, the posterior means of $K$ and $\Sigma$ can be computed explicitly. Since the $W_{P_G}$ is a natural exponential family, the posterior mean of $K$ is obtained by differentiating the cumulant generating function of its distribution. Due to the distributional properties and the independences in (iii) and (iv) of the theorem above, it is also straightforward to derive the expected value of $\Sigma$ (see Theorem 3.1 of [40]) through the expected value of each of the Cholesky elements listed in (iv). Each $E(\Sigma_{R_i \cdot S_i} \mid z^1,\ldots,z^n)$ is a linear combination of $\theta_{R_i \cdot S_i}$ and $u_{R_i \cdot S_i}$. The posterior mean is therefore a shrinkage estimate of $\Sigma$, and through the choice of $(\alpha,\beta)$ one can shrink different layers of the Cholesky decomposition with different intensity.
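The shrinkage form of the posterior mean can be illustrated with a minimal sketch. This is not the $IW_{P_G}$ construction itself: the function below, its name, and the hyperparameters `alpha` and `beta` are illustrative assumptions standing in for the prior strength and the data weight, showing how each posterior expectation combines a prior location $\theta$ with a data summary $u$.

```python
import numpy as np

def shrinkage_posterior_mean(theta, u, alpha, beta):
    """Illustrative sketch (hypothetical helper, not the IW_{P_G} formula):
    posterior mean as a convex combination of the prior location `theta`
    and the data summary `u`; weight alpha/(alpha+beta) goes to the prior."""
    w = alpha / (alpha + beta)
    return w * theta + (1.0 - w) * u

# One "layer" of Cholesky elements: prior location and data statistic.
theta = np.array([0.0, 0.0])   # prior location
u = np.array([1.0, 2.0])       # summary computed from data z^1, ..., z^n
post = shrinkage_posterior_mean(theta, u, alpha=2.0, beta=8.0)
print(post)  # each coordinate is shrunk 20% toward the prior location
```

Varying `(alpha, beta)` across layers changes the weight `alpha/(alpha+beta)`, which is how different layers of the decomposition can be shrunk with different intensity.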
Tweedie hidden Markov random field and the expectation-method of moments and maximisation algorithm for brain MR image segmentation
Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2023
Mouna Zitouni, Afif Masmoudi, Mourad Zribi
The function $\kappa$ is called the cumulant function of the EDM. It is strictly convex and infinitely differentiable, and its differential is given by $\mu(\theta) = \kappa'(\theta)$, called the mean of the family. The mapping from the canonical parameter $\theta$ to the mean $\mu$ is invertible, so we may write $\theta = \tau(\mu)$, where $\tau$ is the inverse function of $\kappa'$. In addition, $V(\mu) = \kappa''(\tau(\mu))$ is called the variance function of the natural exponential family $F$. The parametrisation by $\mu$ is called the mean parametrisation of reproductive EDMs, and the probability density function can be expressed as $f(y;\mu,\lambda) = a(y,\lambda)\exp\{\lambda[\,y\,\tau(\mu) - \kappa(\tau(\mu))\,]\}$.
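These relations can be checked numerically on a concrete family. The sketch below uses the Poisson family viewed as an EDM with unit dispersion (an assumption for illustration): the cumulant function is $\kappa(\theta) = e^\theta$, the mean mapping is $\mu = \kappa'(\theta) = e^\theta$, its inverse is $\tau(\mu) = \log\mu$, and the variance function is $V(\mu) = \kappa''(\tau(\mu)) = \mu$.

```python
import math

# Poisson family as an EDM (unit dispersion, illustrative example):
kappa = math.exp        # cumulant function kappa(theta) = exp(theta)
kappa_prime = math.exp  # mean mapping mu(theta) = kappa'(theta) = exp(theta)
tau = math.log          # inverse mean mapping theta = tau(mu) = log(mu)

def variance_function(mu):
    """V(mu) = kappa''(tau(mu)); for Poisson kappa'' = exp, so V(mu) = mu."""
    return math.exp(tau(mu))

mu = 3.5
theta = tau(mu)
# tau inverts the mean mapping: kappa'(tau(mu)) recovers mu
assert abs(kappa_prime(theta) - mu) < 1e-12
# the Poisson variance function equals the mean
assert abs(variance_function(mu) - mu) < 1e-12
```

The same pattern (define $\kappa$, differentiate to get the mean mapping, invert to get $\tau$, compose to get $V$) applies to any EDM, e.g. the Tweedie families used later in the paper, for which $V(\mu) = \mu^p$.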