Introduction
Published in Andrew Greasley, Simulation Modelling, 2023
Variability can be classified into customer-introduced variability and internal process variability. Customer-introduced variability includes the fact that customers do not arrive uniformly at a service and that different customers require different services with different service times. Also, not all customers appreciate the same things in a service; some like self-service and some do not. Internal process variability arises from processes within the organisation; for example, staff performance varies (this includes both the variability between different people's performance and the variability in one person's execution of a process over time). Variability can also be caused by equipment and material variations.
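Both kinds of variability are typically represented in a simulation model by sampling from probability distributions. The short sketch below is illustrative only and not taken from the book; the exponential distributions and the rate parameters are assumptions. It generates non-uniform customer arrivals and customer-to-customer differences in service time:

```python
import random

random.seed(42)

MEAN_INTERARRIVAL = 5.0  # assumed mean minutes between arrivals
MEAN_SERVICE = 4.0       # assumed mean service time in minutes

clock = 0.0
for customer in range(5):
    # Customers do not arrive uniformly: exponential inter-arrival times
    clock += random.expovariate(1.0 / MEAN_INTERARRIVAL)
    # Different customers require different amounts of service
    service_time = random.expovariate(1.0 / MEAN_SERVICE)
    print(f"Customer {customer}: arrives at {clock:5.1f} min, "
          f"needs {service_time:4.1f} min of service")
```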
Entropy-Based Measurement of Long-Term Precipitation Variability across India
Published in Surendra Kumar Chandniha, Anil Kumar Lohani, Gopal Krishan, Ajay Krishna Prabhakar, Advances in Hydrology and Climate Change, 2023
Prabhash Kumar Mishra, Hemant Singh, Swagatam Das, Surendra Kumar Chandniha
Literally, variability refers to the quality of being subject to variation; in other words, it is the degree to which something is variable or changeable. Statistically, it is defined as the extent to which data points differ from the mean value. The median and the mean are single summary values that differ from the individual observations, and the extent to which observations deviate from them depends upon the variability, or dispersion, in the original data. A dataset is highly dispersed when it contains values considerably higher and lower than the median or the mean. There are four commonly used measures of variability: range, interquartile range, variance, and standard deviation. The most commonly used of these is the variance, which captures the degree of spread of the data points about the mean. Variability may be measured spatially as well as temporally. Spatial variability is represented by different values measured at different geographical locations; for example, the spatial variability across a network of rain gauges can be investigated to assess the contribution of each individual station to the total rainfall received in the region. Temporal variability measures the randomness of a data series over different time scales.
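As a concrete illustration, the sketch below computes the four measures for a small sample; the rainfall values are made up for the example and are not drawn from the chapter:

```python
import statistics

# Hypothetical annual rainfall totals (mm) at one gauge
rainfall = [820.0, 910.0, 765.0, 1040.0, 880.0, 990.0, 700.0]

data_range = max(rainfall) - min(rainfall)
q1, _, q3 = statistics.quantiles(rainfall, n=4)  # quartile cut points
iqr = q3 - q1
variance = statistics.variance(rainfall)         # sample variance
std_dev = statistics.stdev(rainfall)             # sample standard deviation

print(f"range    = {data_range:.1f} mm")
print(f"IQR      = {iqr:.1f} mm")
print(f"variance = {variance:.1f} mm^2")
print(f"std dev  = {std_dev:.1f} mm")
```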
Big Data Analytics in Oil and Gas Industry
Published in Anirbid Sircar, Gautami Tripathi, Namrata Bist, Kashish Ara Shakil, Mithileysh Sathiyanarayanan, Emerging Technologies for Sustainable and Smart Energy, 2022
Vrutang Shah, Jaimin Shah, Kaushalkumar Dudhat, Payal Mehta, Manan Shah
The 6Vs that make this BD tool feasible are volume, variety, velocity, veracity, value, and variability (Alguliyev et al., 2017). Data volumes in O&G range from gigabytes to petabytes. The term “variety” alludes to the fact that data may be found in a number of formats, including structured, unstructured, and semi-structured. Velocity relates to the rate at which data is created at each time step. Veracity refers to the quality and trustworthiness of the data; the higher the data quality, the more efficient the analysis process becomes. Variability refers to the changes that occur in data throughout processing and over its lifetime. Together, these Vs add value by enabling the forecasting of possible geological difficulties and the detection of failures before they occur. This has significant implications for production, failure detection, efficiency enhancement, health optimisation, and the future trend of O&G in the stock market, among other things. By combining this data with advances in massively parallel computing machines, increased storage capacity, and a new generation of wireless networks, real-time applications such as remote oilfield monitoring have become more practical. Advanced BD analytics technologies can help businesses optimise the output potential of their assets while also addressing performance shortcomings (Brun et al., 2017).
Uncertainty analysis of life cycle assessment of asphalt surfacings
Published in Road Materials and Pavement Design, 2023
Ahmed Abed, Diana Eliza Godoi Bizarro, Luis Neves, Tony Parry, Elisabeth Keijzer, Bjorn Kalman, Ana Jimenez Del Barco Carrion, Konstantinos Mantalovas, Gabriella Buttitta, Davide Lo Presti, Gordon Airey
The importance of incorporating LCI uncertainty and variability has been recognised since the 1990s. Weidema and Wesnaes (1996) investigated the uncertainty of LCI data and introduced two types of uncertainty: basic uncertainty, which is related to the variations of an inventory, and additional uncertainty, which is related to data quality and can be assessed using the ‘pedigree matrix’ method. They concluded that low-quality data induces bias and increases the uncertainty in impact estimates. Huijbregts (1998) identified LCI data uncertainty and variability as two factors that can make the outcomes of LCA studies questionable. He defined variability as variation between observations, whereas uncertainty is associated with a lack of confidence in the data resulting from inaccurate measurement, lack of observation data, or model assumptions. Huijbregts et al. (2001) stated that lack of data and data inaccuracy are the major sources of uncertainty, and introduced a stochastic framework based on Monte Carlo Simulation (MCS) to incorporate LCI data uncertainty in LCA studies.
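In outline, such an MCS framework repeatedly samples the inventory parameters from assumed distributions and propagates each sample through the impact model. The sketch below is a minimal illustration, not the cited framework: the lognormal distributions, inventory items, quantities, and emission factors are all assumptions made for the example.

```python
import math
import random

random.seed(0)

# Hypothetical inventory data (illustrative values only):
# item -> (best-estimate emission factor in kg CO2e/unit, geometric std dev)
factors = {
    "bitumen":   (0.43, 1.2),
    "aggregate": (0.005, 1.1),
    "transport": (0.12, 1.3),
}
quantities = {"bitumen": 50.0, "aggregate": 950.0, "transport": 200.0}

def sample_impact():
    """One Monte Carlo draw: sample each emission factor from a lognormal
    distribution (median = best estimate) and sum the resulting impacts."""
    total = 0.0
    for item, (median, gsd) in factors.items():
        factor = random.lognormvariate(math.log(median), math.log(gsd))
        total += quantities[item] * factor
    return total

draws = sorted(sample_impact() for _ in range(10_000))
print(f"median impact: {draws[len(draws) // 2]:.1f} kg CO2e")
print(f"95% interval : [{draws[250]:.1f}, {draws[9750]:.1f}] kg CO2e")
```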
Production control of hybrid manufacturing–remanufacturing systems under demand and return variations
Published in International Journal of Production Research, 2019
Vladmir Polotski, Jean-Pierre Kenné, Ali Gharbi
The term variable demand is used in the research literature in a very wide sense: sometimes it is used in the context of random demand, with a solution that is expected to be optimal in the average sense (see, e.g., Hilger, Sahling, and Tempelmeier (2016)). That is different from the sense in which the term is used in our paper. Variability differs from uncertainty in that the former means that the value is known or well predicted but evolves in time, while the latter means that the value is not precisely known (even if it is constant). In our case, the demand and return vary in time and are known. This is conceptually close to the problem of ‘inventory management with advance demand information’ – see for example Wong and Toktay (2008). In both of the aforementioned approaches, the manufacturing systems under consideration are fully reliable, in contrast to the context of our study.
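The distinction can be made concrete in a few lines of code. The sketch below is illustrative only; the sinusoidal demand profile and the parameter values are assumptions. The variable demand is a known function of time, whereas the uncertain demand is a random draw whose realised value is unknown in advance even though its mean is constant:

```python
import math
import random

random.seed(1)

MEAN_DEMAND = 100.0  # assumed average demand (units/period)

def variable_demand(t):
    """Variability: the value evolves in time but is known/predictable."""
    return MEAN_DEMAND + 20.0 * math.sin(2 * math.pi * t / 12)  # seasonal cycle

def uncertain_demand():
    """Uncertainty: the realised value is not precisely known in advance,
    even though it is drawn around a constant mean."""
    return random.gauss(MEAN_DEMAND, 20.0)

for t in range(4):
    print(f"t={t}: variable (known) = {variable_demand(t):6.1f}, "
          f"uncertain (random) = {uncertain_demand():6.1f}")
```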
Robust design optimisation via surrogate network model and soft outer array design
Published in International Journal of Production Research, 2018
A three-level factorial design is chosen herein for the five control factors. The parameter values of the initial design are taken as the second levels, so that the design space explored is centred at the initial design. A minimal orthogonal array, L18, is selected for the control factors. The gain of each treatment of the control factors varies when the design is in production. The causes of variability are manufacturing errors, parameter variations, deterioration, and uncertainties in operational conditions. The initial noise factors considered in this study include the dimensional errors of the microstructure, the fluctuations ΔE_Si and ΔE_PZT of the Young’s modulus of the silicon and PZT thin films, the dielectric constant Δε and the piezoelectric constant Δd_31 of the PZT film, and the operating frequency ω of the acceleration, as shown in Table 1. There are 11 noise factors; the operating frequency ω within the required bandwidth is treated as four-level because of its wide, non-linear range, and the remaining noise factors are taken as two-level. A modified L16 orthogonal array with one four-level column and twelve two-level columns is selected for the noise factors to determine their significance. The noise factors are applied to the initial design of the accelerometer to estimate the corresponding response of each treatment using ANSYS simulation. The noise effects plot is shown in Figure 4. The analysis of variance (ANOVA) of the 11 noise factors indicates that the sums of squares (SS) for the operating frequency ω and the manufacturing tolerances of w_b and h_m are very low. The sums of squares of these three noise factors are therefore pooled into the error term. ANOVA for the reduced model of eight noise factors in Table 2 shows that all remaining noise factors are significant at the 5% level. Therefore, the L16 outer array is retained for the eight two-level factors in Taguchi’s experimental design to estimate the robustness measure.
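For a two-level factor in an orthogonal array, the sum of squares follows from the contrast between the response totals at its two levels, and factors with negligible SS are pooled into the error term. The sketch below is a generic illustration with made-up responses and an arbitrary pooling threshold, not the accelerometer data of Tables 1 and 2:

```python
def factor_ss(levels, y):
    """Sum of squares of a two-level factor in an N-run orthogonal array:
    SS = (T_high - T_low)^2 / N."""
    n = len(y)
    t_high = sum(yi for li, yi in zip(levels, y) if li > 0)
    t_low = sum(yi for li, yi in zip(levels, y) if li < 0)
    return (t_high - t_low) ** 2 / n

# Toy example: a 4-run L4 array with two factors and made-up responses
runs = {
    "A": [-1, -1, +1, +1],
    "B": [-1, +1, -1, +1],
}
y = [10.2, 10.4, 12.9, 13.1]

ss = {f: factor_ss(lv, y) for f, lv in runs.items()}
total = sum(ss.values())
# Pool factors contributing very little (here, < 5% of total SS) into error
pooled_error = sum(v for v in ss.values() if v < 0.05 * total)
print(ss, "pooled error:", pooled_error)
```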