Thermal Nanosensors
Published in Vinod Kumar Khanna, Nanosensors, 2021
How is the device used in heat measurements? The addendum heat capacity of the nanocalorimeter is less than 2 × 10⁻⁷ J K⁻¹ at room temperature and 2 × 10⁻¹⁰ J K⁻¹ at 2.3 K. The heat capacities of several thin Cu and Au films were measured and found to agree with bulk values. These measurements showed that the nanocalorimeter can measure the heat capacities of films as thin as 30 nm with an absolute accuracy better than 5%, limited by a combination of electrical noise (the random fluctuation in an electrical signal that is characteristic of all electronic circuits), film-thickness uncertainty, and a ≤2% systematic error from the measurement technique. A systematic error deviates from the true value of the measurement by a fixed amount; it commonly arises from an offset (zero-setting) error or a multiplier (scale-factor) error in the measuring instrument, from incorrect use of the instrument, or from changes in the environment during the experiment.
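The stated 5% accuracy bound comes from combining several independent error sources. A common way to combine such sources is root-sum-square (quadrature) addition of relative errors; the sketch below illustrates this with an assumed error budget — the individual noise and thickness percentages are hypothetical, only the ≤2% systematic figure and the 5% bound come from the excerpt.

```python
import math

def combined_relative_error(*components):
    """Root-sum-square combination of independent relative error sources."""
    return math.sqrt(sum(c ** 2 for c in components))

# Illustrative (hypothetical) error budget for a thin-film heat capacity
# measurement: electrical noise, film-thickness uncertainty, and the
# <=2% systematic error of the measurement technique.
noise = 0.03          # 3% random electrical noise (assumed value)
thickness = 0.03      # 3% film-thickness uncertainty (assumed value)
systematic = 0.02     # <=2% systematic error (from the excerpt)

total = combined_relative_error(noise, thickness, systematic)
print(f"combined relative error: {total:.1%}")  # prints "combined relative error: 4.7%"
```

With these assumed values the combined error stays below the 5% absolute-accuracy bound reported for the nanocalorimeter.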
Improving the efficiency of petroleum transport systems by operative monitoring of oil flows and detection of illegal incuts
Published in Vladimir Litvinenko, Topical Issues of Rational Use of Natural Resources 2019, 2019
A.V. Kopteva, V.V. Starshaya, V.I. Malarev, V.Yu. Koptev
The systematic component of the measurement error of the direct- and scattered-radiation channels is due to the nonlinearity of the output signals of the detecting units, caused by inaccuracies in the initial calibration against the physical characteristics of the monitored flows. Systematic errors are also caused by a number of other factors affecting the accuracy and stability of the calibration characteristics, in particular the aging of system components, electronics drift, changes in ambient temperature, decrease in radiation intensity, etc. To minimize the influence of these factors, an algorithm for automatic adjustment of the calibration characteristics was developed and is presented in Figure 3: the measured and calculated RMI readings are averaged over a minimum time interval on the order of 5 ms.
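The averaging step of such an adjustment scheme can be sketched as follows — a minimal illustration, assuming fixed-size windows of raw detector readings (the sample values and the 1 ms sampling interval are hypothetical, not taken from the paper):

```python
def averaged_readings(samples, window):
    """Average successive readings over non-overlapping fixed-size windows,
    suppressing random fluctuations before calibration adjustment."""
    return [sum(samples[i:i + window]) / window
            for i in range(0, len(samples) - window + 1, window)]

# Hypothetical detector counts sampled every 1 ms, averaged over 5 ms windows
raw = [100, 102, 98, 101, 99, 97, 103, 100, 98, 102]
print(averaged_readings(raw, 5))  # -> [100.0, 100.0]
```

Averaging over short windows suppresses the random component of the signal while leaving slow drifts visible, which is what allows the calibration characteristics to be tracked and corrected.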
Geometric point cloud quality
Published in Belén Riveiro, Roderik Lindenbergh, Laser Scanning, 2019
TLS measurements are affected by systematic, gross and random errors. Systematic errors are repeatable phenomena (biases) that arise due to imperfect instrument manufacture or assembly and environmental conditions. In principle, these errors can be modelled and corrections applied to remove or reduce their effect on the observations. For some error sources, such as non-orthogonality of instrumental axes, the physical cause is apparent and corresponding functional models are readily developed. For others, such as multi-path reflections, a universal correction model does not exist. Such errors are considered gross errors (or outliers or blunders) and must be identified and removed either manually or semi-automatically. For TLS scan registration, the ability to detect gross errors is maximized with strong geometric network design. For manual point cloud editing, expert knowledge and/or local statistical measures are used. The third category is random errors, which cannot be predicted. Their behaviour is described with a stochastic model in terms of moments of their probability density function. See Mikhail and Ackermann (1976), for example, for more details.
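The semi-automatic identification of gross errors via local statistical measures can be illustrated with a simple robust outlier test. The sketch below is an assumption, not the method of any particular TLS software: it flags residuals far from the median, using the median absolute deviation (MAD) so that the blunder itself does not inflate the spread estimate.

```python
import statistics

def flag_gross_errors(residuals, k=3.0):
    """Flag residuals lying more than k robust standard deviations from the
    median, estimating the spread with the median absolute deviation (MAD)."""
    med = statistics.median(residuals)
    mad = statistics.median(abs(r - med) for r in residuals)
    robust_sigma = 1.4826 * mad  # MAD scaled to be consistent with sigma for normal data
    return [abs(r - med) > k * robust_sigma for r in residuals]

# Hypothetical range residuals (mm); the 25 mm value mimics a multi-path blunder
res = [0.4, -0.2, 0.1, -0.5, 0.3, 25.0, -0.1, 0.2]
flags = flag_gross_errors(res)
print(flags)  # only the 25.0 mm residual is flagged
```

A plain mean/standard-deviation test would fail here: the blunder inflates the sample standard deviation enough to mask itself, which is why robust statistics are preferred for outlier screening.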
Uncertainty modeling and applications for operating data-driven inverse design
Published in Journal of Engineering Design, 2023
Shijiang Li, Liang Hou, Zebo Chen, Shaojie Wang, Xiangjian Bu
The data acquisition process involves measurement uncertainty. Because of the high volume and diversity of products, data acquisition instruments must meet the requirements of low cost and customisability; however, this reduces acquisition accuracy. An instrument is used to measure a certain parameter and obtain an output result. Owing to the error of the instrument itself and to interference from the environment and personnel, errors are expected between the output result and the real value. As shown in Figure 2, there are three main types of error: systematic error, random error, and gross error. Systematic error is caused by the limitations of the measurement instrument or method, and its variation pattern is often predictable. Random error is due to factors in the measurement process (e.g. environmental noise) and is unpredictable. Gross error results from significant deviations from the true value; these may arise from factors such as carelessness of the testing personnel and must be avoided during the measurement process (Fan et al. 2017).
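The decomposition above — output = true value + systematic error + random error — implies a practical consequence: averaging repeated readings suppresses the random component but not the bias. A minimal simulation, with an assumed bias of 0.3 and Gaussian noise of standard deviation 0.1 (both values hypothetical):

```python
import random

random.seed(42)

TRUE_VALUE = 10.0
SYSTEMATIC = 0.3  # constant instrument bias (assumed value)

def measure():
    """One reading = true value + fixed systematic error + random error."""
    return TRUE_VALUE + SYSTEMATIC + random.gauss(0.0, 0.1)

readings = [measure() for _ in range(1000)]
mean = sum(readings) / len(readings)
# The mean of many readings converges to TRUE_VALUE + SYSTEMATIC,
# so the remaining offset is the (unremoved) systematic error:
print(round(mean - TRUE_VALUE, 2))
```

This is why systematic error must be removed by calibration while random error can be reduced by repetition, and why gross errors must be screened out before either step.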
Gene expression programming-based approach for predicting the roller length of a hydraulic jump on a rough bed
Published in ISH Journal of Hydraulic Engineering, 2021
Hamed Azimi, Hossein Bonakdari, Isa Ebtehaj
First of all, the experimental uncertainty should be considered when computing the total uncertainty in the GEP models’ prediction of the roller length of a hydraulic jump on a rough bed. Experimental measurement accuracy depends on two major sources: human error and equipment error. Equipment error (systematic error) is measurable and its source can be identified; however, it is very difficult to trace the source of human error, which is also known as random error. To prevent human error during the experiments, the measurements were repeated and the averages were taken as the final values. According to Carollo et al. (2007, 2009), the roller length was equal to the horizontal distance between the jump toe and the roller end section, which was located by visualizing the characteristic stagnation point with a float. They measured the flow depths in the channel with a point gauge, so human error would be negligible. Moreover, the point gauge accuracy was ±0.1 mm in Carollo et al.’s (2007, 2009) work. Over the range of flow depths applied in the proposed models (0.00142 to 0.1636 m), the resulting experimental measurement error is between 0.06% and 7%, with an average of 3.53%. On the other hand, the mean error of the GEP results is 7.9%. Therefore, the total uncertainty between the GEP outputs and the measured values is around 7.9% + 3.53% = 11.43%.
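The arithmetic in this uncertainty budget can be reproduced directly from the figures quoted above — a minimal sketch, assuming (as the excerpt does) that the model error and the mean experimental error simply add:

```python
def total_uncertainty(model_error_pct, exp_error_pct):
    """Total uncertainty as the simple sum of the model prediction error
    and the mean experimental measurement error (both in percent)."""
    return model_error_pct + exp_error_pct

# Values reported in the excerpt above
gauge_accuracy_mm = 0.1           # point gauge accuracy, +/- 0.1 mm
depth_range_m = (0.00142, 0.1636)  # smallest and largest flow depths

# Relative error of a depth reading, in percent
exp_error_pct = [100 * (gauge_accuracy_mm / 1000) / d for d in depth_range_m]
print([round(e, 2) for e in exp_error_pct])        # -> [7.04, 0.06]
print(round(total_uncertainty(7.9, 3.53), 2))      # -> 11.43
```

The smallest depth gives the largest relative error (~7%), the largest depth the smallest (~0.06%), matching the stated range, and the total uncertainty reproduces the 11.43% figure.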
Statistical arguments towards the development of an advanced embrittlement correlation method for reactor pressure vessel materials
Published in Journal of Nuclear Science and Technology, 2020
Toshiki Nakasuji, Kazunori Morishita
It is known from a statistical point of view that there are two types of errors: random error and systematic error. Systematic error produces fluctuations in the data values that depend on parameters such as the neutron flux, temperature, and chemical composition, whereas random error is statistical variability that inevitably appears and generally obeys the normal probability distribution. According to refs [21,22], the average behavior of approximately 400 residual data points in JEAC 4201 does not depend on any irradiation or material condition such as the neutron flux, temperature, or chemical composition. Rather, it may be more accurate to state that the coefficients of the rate theory equations have been determined so that such dependencies do not appear, to the extent possible. Regardless, this may indicate that dependencies on those conditions are all successfully captured within the description of the rate theory equations, and thus that the residuals contain only the component of random error. Since the residual data defined above appear to show only the behavior of random noise, it is hereinafter assumed that the residuals are well described by the normal probability distribution. This characteristic of the residual data is very useful later when considering the correction method used to improve the accuracy of the prediction.
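The assumption that residuals follow the normal distribution can be checked empirically by comparing the fraction of residuals within one and two standard deviations of the mean against the theoretical ~68.3% and ~95.4%. The sketch below uses synthetic data standing in for the JEAC 4201 residuals (an assumption; the real residual data are not reproduced here):

```python
import random
import statistics

random.seed(0)
# Synthetic residuals standing in for the ~400 JEAC 4201 residuals (assumption)
residuals = [random.gauss(0.0, 1.0) for _ in range(10000)]

mu = statistics.mean(residuals)
sigma = statistics.stdev(residuals)
within_1s = sum(abs(r - mu) <= sigma for r in residuals) / len(residuals)
within_2s = sum(abs(r - mu) <= 2 * sigma for r in residuals) / len(residuals)
# For normally distributed residuals these fractions should be
# close to 0.683 and 0.954, respectively
print(round(within_1s, 2), round(within_2s, 2))
```

A pronounced departure from these fractions (or from a straight line on a normal probability plot) would signal that the residuals still carry a systematic component, invalidating the normality assumption used for the correction method.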