Toward the integration of uncertainty and probabilities in spatial multi-criteria risk analysis
Published in Stein Haugen, Anne Barros, Coen van Gulijk, Trond Kongsvik, Jan Erik Vinnem, Safety and Reliability – Safe Societies in a Changing World, 2018
Bayesian inference is an alternative to classical statistical inference. In the latter, also known as frequentist inference, only repeatable events have probabilities, while in Bayesian inference probability describes both epistemic and aleatory uncertainty (e.g. O’Hagan, 2003). Indeed, Bayesian analysis combines data, represented through the likelihood function, with prior knowledge about the parameters, which may come from other data sets or the modeler’s experience and physical intuition (e.g. Reis Jr and Stedinger, 2005). The a priori distribution describes what is known before observing any data, while the likelihood reflects the information about the parameters contained in the data. Parameter estimation is made through the posterior distribution, which is computed using Bayes’ Theorem (e.g. O’Hagan, 2003): p(θ | y) ∝ L(y; θ) p(θ)
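To make the proportionality concrete, the sketch below works through a conjugate Beta-Binomial update in Python; the trial counts and the Beta(2, 2) prior are invented for illustration and are not taken from the chapter.

```python
# Minimal sketch of p(theta | y) ∝ L(y; theta) * p(theta) with a Beta-Binomial model.
# Data and prior are hypothetical, chosen only to demonstrate the mechanics.
import numpy as np
from scipy import stats

n, y = 20, 7                        # hypothetical: y successes in n trials
a_prior, b_prior = 2.0, 2.0         # prior knowledge encoded as Beta(a, b)

# Conjugacy: the posterior is again a Beta distribution
a_post, b_post = a_prior + y, b_prior + (n - y)

theta = np.linspace(0.001, 0.999, 500)
prior = stats.beta.pdf(theta, a_prior, b_prior)
likelihood = stats.binom.pmf(y, n, theta)            # L(y; theta)
posterior = stats.beta.pdf(theta, a_post, b_post)    # p(theta | y)

# Proportionality check: posterior / (prior * likelihood) is constant over theta
ratio = posterior / (prior * likelihood)
print(f"ratio range: {ratio.min():.4f} to {ratio.max():.4f}")
print(f"posterior mean of theta: {a_post / (a_post + b_post):.3f}")
```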
Evaluation and Incorporation of Uncertainties in Geotechnical Engineering
Published in Chong Tang, Kok-Kwang Phoon, Model Uncertainties in Foundation Design, 2021
There are two distinct “philosophies” of inference: frequentist and Bayesian. The fundamental difference between them lies in how the concept of probability is interpreted. The frequentist approach defines an event’s probability as the limit of its relative frequency in a large number of trials. From a Bayesian viewpoint, probability expresses a degree of belief about the value of an unknown parameter; it is a measure of the plausibility of an event given incomplete knowledge. Frequentist inference is based on sampling theory, in which random samples are taken from a population to ascertain the underlying parameters of interest (e.g. mean, COV or correlation). From a frequentist viewpoint, unknown parameters are assumed to have fixed but unknown values and cannot be treated as random variates; hence, probabilities cannot be associated with them. In contrast, Bayesian inference assigns probabilities to represent the belief that given values of the parameter are true. While “probabilities” are involved in both approaches, they are associated with different entities. The result of a Bayesian analysis can be a probability distribution for what is known about the parameters, while the result of a frequentist analysis is either a “true or false” conclusion from a significance test or a conclusion from a confidence interval. Because it is computationally easy to implement, the frequentist approach currently dominates the characterization of geotechnical data, as outlined in DNVGL-RP-C207 (DNV 2017).
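The contrast can be illustrated with a small numerical sketch; the sample, the normal prior, and the known-variance simplification below are assumptions made for demonstration and are not part of DNVGL-RP-C207 or the chapter.

```python
# Illustrative contrast: estimating the mean of a soil property from a small sample.
# Frequentist: the mean is fixed but unknown -> point estimate plus a confidence interval.
# Bayesian: uncertainty about the mean is expressed as a distribution -> credible interval.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=30.0, scale=5.0, size=15)    # hypothetical measurements

# Frequentist estimate and 95% confidence interval (t-based)
m, s, n = sample.mean(), sample.std(ddof=1), sample.size
ci = stats.t.interval(0.95, df=n - 1, loc=m, scale=s / np.sqrt(n))

# Bayesian update: normal prior on the mean, sigma assumed known (= sample sd) for simplicity
prior_mean, prior_sd = 25.0, 10.0
post_var = 1.0 / (1.0 / prior_sd**2 + n / s**2)
post_mean = post_var * (prior_mean / prior_sd**2 + n * m / s**2)
cred = stats.norm.interval(0.95, loc=post_mean, scale=np.sqrt(post_var))

print("95% confidence interval:", ci)
print("95% credible interval:  ", cred)
```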
A computerized hybrid Bayesian-based approach for modelling the deterioration of concrete bridge decks
Published in Structure and Infrastructure Engineering, 2019
Eslam Mohammed Abdelkader, Tarek Zayed, Mohamed Marzouk
The most significant difference between Bayesian inference and frequentist inference is the capability of Bayesian inference to include additional information in the form of a prior distribution (Rudas, 2008). Kelly et al. (2010) illustrated that the main distinctive feature of Bayesian inference is its capability to incorporate information from different sources into the inference model. Thus, Bayesian inference integrates old knowledge and new knowledge into an evidence-based state-of-knowledge distribution. Bayesian inference is based on interpreting probability as ‘a rational, and conditional measure of uncertainty’, which nearly matches the sense of the word probability in ordinary language (Bernardo, 2003).
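As a hedged illustration of this integration of old and new knowledge (not the authors’ deterioration model), the sketch below reuses the posterior from one hypothetical inspection as the prior for the next.

```python
# Sequential Bayesian updating: the posterior after each inspection becomes the prior
# for the next one. Counts are invented for illustration only.
from scipy import stats

inspections = [(50, 4), (80, 9)]    # hypothetical (elements inspected, elements defective)

a, b = 1.0, 1.0                     # vague initial prior on the defect rate, Beta(1, 1)
for n, k in inspections:
    a, b = a + k, b + (n - k)       # updated parameters, reused as the next prior

posterior = stats.beta(a, b)
print(f"posterior mean defect rate: {posterior.mean():.3f}")
print(f"95% credible interval: {posterior.interval(0.95)}")
```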
Development of a nonlinear rutting model for asphalt concrete based on Weibull parameters
Published in International Journal of Pavement Engineering, 2019
A. S. M. Asifur Rahman, Matias M. Mendez Larrain, Rafiqul A. Tarefder
The correlation coefficient r for a sample of a population is interpreted through its value, which ranges between −1 and +1. An r-value close to +1 or −1 indicates that the independent variable (material attribute) is strongly related to the dependent variables, positively or negatively. In this study, the dependent variables are the Weibull β and η. The p-value, on the other hand, is defined as the probability of obtaining a result equal to or ‘more extreme’ than what was actually observed, assuming the null hypothesis is true. In frequentist inference, the p-value is widely used in statistical hypothesis testing, specifically in null hypothesis significance testing; it quantifies how incompatible the observed data are with the null hypothesis. If the p-value is very small, the null hypothesis, which posits no relation between the variables, is rejected and the alternative hypothesis is accepted, indicating that the variables are significantly related. If the p-value is large, we fail to reject the null hypothesis, and the data provide no evidence of a relationship between the tested variables. For example, a small p-value (typically ≤ 0.05) indicates strong evidence against the null hypothesis, so we reject it, whereas a large p-value (> 0.05) indicates weak evidence against the null hypothesis, so we fail to reject it.

The significance level, denoted alpha or α, is the probability of rejecting the null hypothesis when it is true. For example, a significance level of 0.05 indicates a 5% risk of concluding that a difference exists when there is no actual difference. Selection of the significance level depends on the analyst and the problem at hand; typical α-values are 0.01, 0.05, or 0.10. However, the analyst or modeller can always increase this value to retain variables that the tests deem less significant but that are known to be important from other sources or studies.

It should be noted that, for both the linear-dependency and significance tests, all independent variables other than the one being tested are assumed to be held equal. For the present case, however, this ‘all else equal’ condition cannot be maintained because the material attributes vary simultaneously across all the AC samples. Therefore, spurious results are possible when evaluating linear dependencies between the independent and dependent variables.
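As an illustration of how such an r and its associated p-value can be obtained, the sketch below uses scipy.stats.pearsonr on invented data; the attribute name, the generated values, and the 0.05 threshold are assumptions for demonstration, not values from the study.

```python
# Illustrative r / p-value computation for one material attribute against a Weibull
# parameter. The data are synthetic and stand in for measured AC sample attributes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
air_voids = rng.uniform(3.0, 8.0, size=12)                        # hypothetical attribute
weibull_beta = 1.5 - 0.1 * air_voids + rng.normal(0, 0.05, 12)    # hypothetical dependent variable

r, p_value = stats.pearsonr(air_voids, weibull_beta)

alpha = 0.05                      # chosen significance level
print(f"r = {r:.3f}, p = {p_value:.4f}")
if p_value <= alpha:
    print("reject the null hypothesis: the attribute is significantly related")
else:
    print("fail to reject the null hypothesis")
```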