Evaluation and Incorporation of Uncertainties in Geotechnical Engineering
Published in Chong Tang, Kok-Kwang Phoon, Model Uncertainties in Foundation Design, 2021
Practicing engineers who deal with soils, rocks and geological phenomena are aware of uncertainties and their impact on design, even if they are not quantified explicitly. Uncertainty and reliability have a long history in geotechnical engineering (Christian 2004). Uncertainties arise from different sources involved in the entire geotechnical decision process, as shown in Figure 2.1, taking foundations as an example. These uncertainties can be generalized into three broad categories:
1. Uncertainty in design input parameters. This includes (1) natural variability – all natural soils show variations in properties from point to point in the ground because of inherent variations in composition and consistency during formation (e.g. Lumb 1966; Phoon and Kulhawy 1999a) – and (2) transformation uncertainty in empirical models correlating laboratory measurements and in situ test results, such as the standard penetration test (SPT) or cone penetration test (CPT), to design input parameters, such as undrained shear strength or modulus (e.g. Phoon and Kulhawy 1999a; Ching and Phoon 2014).
2. Model uncertainty in design methods, arising from imperfect representation of reality because of simplifications and idealizations made in the calculation methods predicting the behaviour of geotechnical structures (e.g. Ronold and Bjerager 1992; Lesny 2017a; Phoon and Tang 2019), and from the difficulty of fully accounting for construction effects.
3. Measurement error (also called observational error), the difference between a measured quantity and its true value, attributed to imperfect instruments, sample disturbance, and procedural/operator and random testing effects (Phoon and Kulhawy 1999a).
These sources are at times inter-related, especially for empirical calculation methods. Nevertheless, it is useful to consider them as separate because of the differing degrees of control that designers have over each source, as outlined in Annex D of the fourth edition of ISO 2394 (ISO 2015).
ISO 2394:2015 is intended to be used as a "code for code drafters" in countries where the principles of risk and reliability are used to design and assess structures over their entire service life (Phoon et al. 2016a). According to the Recommended Practice DNVGL-RP-C207 (DNV 2017), natural variability is classified as aleatory, which cannot be reduced. Transformation uncertainty, model uncertainty and measurement error are classified as epistemic (DNV 2017), which can be reduced by collecting more data, by improving calculation models and by employing more accurate methods of measurement or testing. The emphasis throughout is on the characterization of model uncertainty that is realistically grounded on a load test database. The reader can refer to standard texts for information on other sources of uncertainty (e.g. Phoon and Ching 2014; Phoon et al. 2016b).
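Model uncertainty grounded on a load test database is commonly summarized through a model factor, the ratio of measured to predicted capacity. The sketch below illustrates that calculation with made-up capacity values (the numbers and array names are purely illustrative, not from any real database):

```python
import numpy as np

# Hypothetical load-test database: measured vs. predicted foundation
# capacities in kN. All values are invented for demonstration only.
measured = np.array([1250.0, 980.0, 1430.0, 760.0, 1120.0, 1540.0])
predicted = np.array([1100.0, 1050.0, 1300.0, 820.0, 1000.0, 1400.0])

# Model factor M = measured / predicted capacity. Its sample mean and
# coefficient of variation (COV) are the usual summary statistics for
# the model uncertainty of a design (calculation) method.
model_factor = measured / predicted
mean_M = model_factor.mean()
cov_M = model_factor.std(ddof=1) / mean_M

print(f"mean model factor = {mean_M:.3f}")
print(f"COV of model factor = {cov_M:.3f}")
```

A mean model factor near 1 with a small COV indicates a calculation method that is, on average, unbiased and consistent across the database.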
Assessing the extended-range predictability of the ocean model HYCOM with the REMO ocean data assimilation system (RODAS) in the South Atlantic
Published in Journal of Operational Oceanography, 2021
J. P. S. Carvalho, F. B. Costa, D. Mignac, C. A. S. Tanajura
The diagonal covariance matrix of the observational error depended on the observation type. The SST and SLA data come with an observational error field; these errors were squared to give the variances in the matrix. For the T/S profiles, the observational errors in the model layers were calculated as a function of depth, following Mignac et al. (2015). A scalar factor is used in the EnOI scheme to tune the magnitude of the analysis increment. The highest value (1.0) was established for SLA assimilation; for assimilation of SST and XBT profiles the factor was set to 0.3, and for the remaining T/S profiles to 0.5.
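The two ingredients described above – a diagonal observation-error covariance built by squaring an error field, and a scalar factor scaling the analysis increment – can be sketched with a toy EnOI update. All sizes and values are illustrative, and the factor name `alpha` is assumed for demonstration (the excerpt's symbol was lost in extraction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem sizes: state length, ensemble members, observations.
n, m, p = 8, 20, 3
xb = rng.normal(size=n)              # background state
ens = rng.normal(size=(n, m))        # static ensemble (EnOI uses a fixed one)
A = ens - ens.mean(axis=1, keepdims=True)
B = A @ A.T / (m - 1)                # background-error covariance from ensemble

H = np.zeros((p, n))                 # observation operator (picks 3 points)
H[0, 1] = H[1, 4] = H[2, 6] = 1.0
y = rng.normal(size=p)               # observations

# Diagonal observation-error covariance: the error field is squared to
# give the variances on the diagonal, as described in the text.
obs_error_field = np.array([0.3, 0.2, 0.5])
R = np.diag(obs_error_field ** 2)

alpha = 0.5                          # scalar factor tuning increment magnitude

# EnOI analysis: xa = xb + alpha * K (y - H xb), with Kalman gain
# K = B H^T (H B H^T + R)^{-1}.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + alpha * (K @ (y - H @ xb))
print(xa)
```

Setting the factor to 1.0 applies the full increment (as for SLA here), while smaller values such as 0.3 or 0.5 damp the correction.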
Calculating confidence intervals for percentiles of accelerated life tests with subsampling
Published in Quality Technology & Quantitative Management, 2019
Guodong Wang, Li Shao, Honggen Chen, Qingan Cui, Shanshan Lv
Freeman and Vining (2010) introduced a two-stage method to take into account the effect of subsampling. Vining (2013) illustrated the concepts of experimental error and observational error in the design of experiments, noting that observational error is a part of the total experimental error. Vining, Freeman, and Kensler (2015) indicated that the two-stage method is susceptible to bias in the estimation of the shape parameter of Weibull distributions. Furthermore, the two-stage method cannot compute confidence intervals (CIs) of percentiles. Wang, Niu, and He (2015b) presented an unbiasing factor method to reduce the biases of parameter estimates obtained from the method of Freeman and Vining (2010). Wang, Niu, Lv, Qu, and He (2016) presented a two-stage bootstrap method to compute confidence intervals of percentiles. Kensler, Freeman, and Vining (2014) extended the two-stage method to the situation with test stands nested within random blocks.
Quantified Validation with Uncertainty Analysis for Turbulent Single-Phase Friction Models
Published in Nuclear Technology, 2019
Nathan W. Porter, Vincent A. Mousseau, Maria N. Avramova
For each model, the experimental measurements are taken to equal the model prediction plus some observational error ε, which is generally assumed to be normally distributed with a mean of zero: ε ~ N(0, σ²). In a probabilistic framework, the parameters θ for each model have associated uncertainty σ_θ. The parameter estimation process yields estimates of the mean parameter value μ_θ, the parameter uncertainty σ_θ, and the variance σ² of the observational error.
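A minimal sketch of that estimation process, assuming a simple linear stand-in model with synthetic data (the model form, parameter names and noise level are all invented for illustration): a least-squares fit yields point estimates of the parameters, the residual variance estimates the observational-error variance, and the parameter covariance follows under the normal-error assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: a linear model f(x; a, b) = a*x + b observed with
# zero-mean normal observational error (values invented for illustration).
true_a, true_b, sigma = 2.0, 0.5, 0.1
x = np.linspace(0.0, 1.0, 50)
y = true_a * x + true_b + rng.normal(scale=sigma, size=x.size)

# Least-squares parameter estimates.
X = np.column_stack([x, np.ones_like(x)])
theta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Residual variance estimates the observational-error variance sigma^2
# (ddof = number of fitted parameters).
residuals = y - X @ theta
var_obs = residuals.var(ddof=2)

# Parameter covariance (uncertainty) under the normal-error assumption.
cov_theta = var_obs * np.linalg.inv(X.T @ X)

print("parameter estimates (a, b):", theta)
print("observational-error variance:", var_obs)
print("parameter std devs:", np.sqrt(np.diag(cov_theta)))
```

With more data or smaller noise, both the parameter uncertainty and the estimated error variance shrink toward their true values.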