Function Estimation
Published in M. Necati Özisik, Helcio R. B. Orlande, Inverse Heat Transfer, 2021
M. Necati Özisik, Helcio R. B. Orlande
Improper priors do not pose difficulties for the application of the Metropolis-Hastings algorithm, since the normalizing constants of such densities cancel when α(P*|P(t)) is computed with equation (4.2.3). On the other hand, the above priors involve additional parameters that need to be specified for the application of MCMC methods, such as γ for the Gaussian smoothness prior and the TV prior, or the parameters α and l in Matérn’s covariance matrix (6.1.3). Values for such parameters can be selected through numerical experiments with simulated experimental data that serve as a reference for the inverse problem under analysis. However, within the Bayesian framework, if a parameter is not known it can be regarded as part of the inference problem, leading to the use of hierarchical or hyperprior models, as described below.
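As a minimal illustration of why the normalizing constant is irrelevant, the random-walk Metropolis-Hastings sketch below (a hypothetical 1-D example, not the book's inverse heat transfer problem) evaluates the target only up to an additive constant in the log; an improper flat prior then contributes nothing to the log acceptance ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D example: estimate a parameter P from noisy data y,
# using an unnormalized (improper) flat prior pi(P) ∝ 1.
y = rng.normal(loc=2.0, scale=0.5, size=50)  # simulated measurements

def log_target(P, sigma=0.5):
    # Log posterior up to an additive constant:
    # log-likelihood + log-prior (log 1 = 0 for the improper flat prior).
    return -0.5 * np.sum((y - P) ** 2) / sigma**2

P = 0.0                                  # initial state P(0)
samples = []
for t in range(5000):
    P_star = P + rng.normal(scale=0.2)   # random-walk proposal
    # Acceptance ratio alpha(P*|P(t)); normalizing constants cancel here.
    log_alpha = log_target(P_star) - log_target(P)
    if np.log(rng.uniform()) < log_alpha:
        P = P_star
    samples.append(P)

est = np.mean(samples[1000:])            # posterior mean after burn-in
```

Because only the difference of log densities enters the acceptance step, any constant factor in the prior (finite or not) drops out, which is exactly why improper priors are usable here.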
Toward the integration of uncertainty and probabilities in spatial multi-criteria risk analysis
Published in Stein Haugen, Anne Barros, Coen van Gulijk, Trond Kongsvik, Jan Erik Vinnem, Safety and Reliability – Safe Societies in a Changing World, 2018
An extra level is added to standard Bayes' theorem in the hierarchical Bayesian approach. In this level, the parameter (θ) is described by a distribution conditional on the hyperparameter (φ). The distribution of this hyperparameter (p(φ)) is a hyperprior distribution. Therefore, when estimating the posterior for the parameter (θ), information from the hyperprior is used in addition to the information from the prior and likelihood.
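A minimal numerical sketch of this extra level, with illustrative distributions (an Exponential hyperprior and Normal prior and likelihood, none of which are prescribed by the text): the posterior of θ is computed on a grid, with the hyperparameter φ integrated out, so that the hyperprior enters the estimate alongside the prior and likelihood.

```python
import numpy as np

# Observed data (hypothetical): one measurement y with unit noise.
y = 1.5

# Grids for the parameter theta and the hyperparameter phi.
theta = np.linspace(-5.0, 5.0, 401)
phi = np.linspace(0.1, 10.0, 400)
dtheta = theta[1] - theta[0]
dphi = phi[1] - phi[0]

def normal_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Hyperprior p(phi): Exponential(1), for illustration.
p_phi = np.exp(-phi)
# Prior p(theta | phi): Normal(0, 1/phi) on the (theta, phi) grid.
p_theta_given_phi = normal_pdf(theta[:, None], 0.0, 1.0 / phi[None, :])
# Likelihood p(y | theta): Normal(theta, 1).
lik = normal_pdf(y, theta, 1.0)

# p(theta | y) ∝ p(y | theta) * ∫ p(theta | phi) p(phi) dphi
marginal_prior = (p_theta_given_phi * p_phi[None, :]).sum(axis=1) * dphi
post = lik * marginal_prior
post /= post.sum() * dtheta              # normalize on the grid

post_mean = (theta * post).sum() * dtheta
```

The posterior mean lands between the prior center (0) and the datum (1.5); how far it shrinks toward 0 depends on the hyperprior, which is the extra information this level contributes.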
Automated variance modeling for three-dimensional point cloud data via Bayesian neural networks
Published in IISE Transactions, 2023
Zhaohui Geng, Arman Sabbaghi, Bopaya Bidanda
Thus, similar to the work of Ferreira et al. (2019), our algorithm utilizes a BELM with the previously described scaled sampling strategy for the input weights to model the relationships between the local geometric descriptors and the variances for landmarks in point cloud data. For each landmark i we let x_i denote its predictors as in Equation (2). For each hidden neuron q we draw the input weights b_q and ω_q based on our scaled sampling strategy. We let H be the k × m matrix whose (i, q) entry is the activation evaluated at ω_q and x_i. Our variance model is then y_j = Hβ_j + ϵ_j, where y_j is the vector of the natural logarithms of the landmark variances for coordinate j, β_j is the vector of output weights that are to be inferred, and ϵ_j is the vector of error terms that are independent and identically distributed Gaussian random variables with unknown variance σ_j². Our prior probability density function on the unknown output weights is Gaussian, and our hyperprior distribution on σ_j² is the Inverse-Gamma density. The joint posterior distribution of the unknown parameters can then be calculated in a straightforward manner using a Gibbs sampling algorithm (Geman and Geman, 1984).
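The conjugate structure that makes the Gibbs sampler straightforward can be sketched as follows for one coordinate; the problem sizes, prior variance, and Inverse-Gamma hyperparameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear model y = H @ beta + eps, eps_i ~ N(0, sigma2),
# mirroring the variance model above; H, beta, and the prior/hyperprior
# settings here are illustrative, not those of the paper.
k, m = 200, 5
H = rng.normal(size=(k, m))
beta_true = np.array([1.0, -0.5, 0.3, 0.0, 2.0])
y = H @ beta_true + rng.normal(scale=0.7, size=k)

tau2 = 10.0          # Gaussian prior on weights: beta ~ N(0, tau2 * I)
a0, b0 = 2.0, 1.0    # Inverse-Gamma(a0, b0) hyperprior on sigma2

beta = np.zeros(m)
sigma2 = 1.0
draws = []
for it in range(2000):
    # beta | sigma2, y ~ N(mu_n, Sigma_n)  (conjugate Gaussian update)
    Sigma_n = np.linalg.inv(H.T @ H / sigma2 + np.eye(m) / tau2)
    mu_n = Sigma_n @ (H.T @ y) / sigma2
    beta = rng.multivariate_normal(mu_n, Sigma_n)
    # sigma2 | beta, y ~ Inverse-Gamma(a0 + k/2, b0 + SSR/2)
    ssr = np.sum((y - H @ beta) ** 2)
    sigma2 = 1.0 / rng.gamma(shape=a0 + k / 2, scale=1.0 / (b0 + ssr / 2))
    draws.append(beta)

beta_hat = np.mean(draws[500:], axis=0)  # posterior mean after burn-in
```

Both full conditionals are standard distributions, so each Gibbs iteration is an exact draw with no accept/reject step — the practical payoff of the Gaussian prior paired with the Inverse-Gamma hyperprior.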
Accessibility for maintenance in the engine room: development and application of a prediction tool for operational costs estimation
Published in Ship Technology Research, 2022
Paola Gualeni, Fabio Perrera, Mattia Raimondo, Tomaso Vairo
Suppose we have a random variable X, with parameter θ, following a certain distribution (X|θ). Furthermore, the parameter θ has its own distribution, assumed to be characterized by a mean µ. In this case, the parameter µ is called a hyperparameter, and its distribution a hyperprior. Suppose a further stage is included: µ itself follows a distribution (µ|s), with mean s. Then s is also called a hyperparameter, and its distribution is likewise a hyperprior distribution.
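A short forward-sampling sketch of this two-stage hierarchy, with Normal distributions assumed at every stage purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical two-stage hierarchy with unit-variance Normal stages:
#   second-stage hyperprior:  mu ~ Normal(s, 1)      (mu | s), s fixed
#   first-stage hyperprior:   theta ~ Normal(mu, 1)  (theta | mu)
#   observation model:        X ~ Normal(theta, 1)   (X | theta)
s = 0.0
n = 100_000
mu = rng.normal(loc=s, scale=1.0, size=n)
theta = rng.normal(loc=mu, scale=1.0)
x = rng.normal(loc=theta, scale=1.0)

# Each stage adds its own variance, so marginally
# X has mean s and Var(X) = 1 + 1 + 1 = 3.
```

Sampling stage by stage makes the chain of dependence explicit: uncertainty about each hyperparameter propagates down the hierarchy and widens the marginal distribution of X.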