Drought Risk Management: Needs and Experiences in Europe
Published in Donald A. Wilhite, Roger S. Pulwarty, Drought and Water Crises, 2017
Jürgen V. Vogt, Paulo Barbosa, Carmelo Cammalleri, Hugo Carrão, Christophe Lavaysse
A shortage of precipitation drives most drought events. This is why the SPI (McKee et al. 1993) is one of the key indicators for meteorological drought monitoring, as highlighted by the World Meteorological Organization (WMO 2006). The computation of the SPI is based on an equiprobability transformation of the probability of observed precipitation into standardized z-score values. Within our systems, the SPI is computed for different accumulation periods of n months (SPI-n, with n = 1, 3, 6, 12, 24, 48 months), using a 30-year reference period (1981–2010). In a first step, the accumulated precipitation data are fitted with a gamma probability density function; the probability given by the fitted cumulative distribution function (cdf) is then mapped, via the inverse of the standard normal cdf (zero mean, unit variance), to a standardized normal value.
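A minimal sketch of the equiprobability transform described above, assuming `precip` holds n-month accumulated precipitation totals for the reference period (the array and function names are illustrative; operational SPI implementations additionally handle zero-precipitation months with a mixed distribution):

```python
import numpy as np
from scipy import stats

def spi(precip):
    """SPI via gamma fit followed by the equiprobability (quantile) transform."""
    # Fit a gamma distribution to the accumulated precipitation totals,
    # fixing the location at 0 since totals are non-negative.
    shape, loc, scale = stats.gamma.fit(precip, floc=0)
    # Equiprobability transform: gamma CDF probability -> standard normal quantile.
    prob = stats.gamma.cdf(precip, shape, loc=loc, scale=scale)
    return stats.norm.ppf(prob)  # standardized z-scores, i.e. SPI values

# Synthetic 30-year monthly series, for illustration only.
rng = np.random.default_rng(42)
sample = rng.gamma(shape=2.0, scale=50.0, size=360)
print(spi(sample)[:5])
```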
Uncertainty sensitivity assessment on the optimization of the design and operation of complex energy systems
Published in Stein Haugen, Anne Barros, Coen van Gulijk, Trond Kongsvik, Jan Erik Vinnem, Safety and Reliability – Safe Societies in a Changing World, 2018
A. Nadal, A. Ruby, C. Bourasseau, D. Riu, C. Bérenguer
An extensive literature review has been carried out to identify existing, validated, or accepted probabilistic uncertainty models for components of energy systems. Table 3 summarizes the uncertain characteristics of the system components considered in the study, together with their associated probability distributions and the references for the chosen uncertainty models. Uniform probability distributions have been assigned to the uncertain characteristics of the innovative components (to reflect the equiprobability of the possible values) and to the uncertain parameters of mature components when no better distribution (e.g., from expert judgement) is available.
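A hypothetical sketch of how such uniform distributions can be propagated through a system model by Monte Carlo sampling; the component, parameter names, and bounds below are illustrative assumptions, not values from the paper's Table 3:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 10_000

# Uniform draws encode equiprobability between the assumed bounds.
efficiency = rng.uniform(0.60, 0.75, n_samples)       # dimensionless, assumed
capital_cost = rng.uniform(800.0, 1200.0, n_samples)  # EUR/kW, assumed

# Toy performance model: cost per unit of useful output.
cost_per_output = capital_cost / efficiency

print(f"mean = {cost_per_output.mean():.1f} EUR/kW, "
      f"95% interval = [{np.percentile(cost_per_output, 2.5):.1f}, "
      f"{np.percentile(cost_per_output, 97.5):.1f}]")
```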
Modelling and quantification of industry 4.0 manufacturing complexity based on information theory: a robotics case study
Published in International Journal of Production Research, 2019
Dimitris Mourtzis, Sophia Fotia, Nikoletta Boli, Ekaterini Vlachou
In the present study, the Digitalised Manufacturing System towards Industry 4.0 is assumed to be probabilistic, since Shannon's entropy is defined only in the context of a probabilistic model. Moreover, the events are assumed to be equiprobable, since this is the worst-case scenario: H takes its maximum value when all the probabilities inside the logarithm are equal. Equiprobability is the tendency to believe that every process involving randomness, as in our case, corresponds to a fair distribution with equal probabilities for every possible outcome (Bertsekas and Tsitsiklis 2002). In the examined scenarios, the possible outcomes could be the various states of a piece of equipment. The worst-case role of equiprobability can be illustrated with a coin toss: for a fair coin, H = −log2(0.5) = 1 bit, whereas for a completely biased coin whose outcome is certain, H = −log2(1) = 0, meaning no uncertainty.

To make the equiprobability assumption in the manufacturing system model more concrete, consider the communication system established between the Manufacturing and the Assembly Department. With the transmitted information referring to the machining operations, the number of variables in the source depends on the number of operations that can be performed by the machine tools (assumed to be 28). Thus, k = 28, and since the operations are assumed to be equiprobable, the source entropy is H = log2(28) ≈ 4.81 bits/message. Because equiprobability enters into the channel capacity, the more choices there are, the larger H (the entropy) and therefore the channel capacity will be. In this communication system, each machine tool used in the manufacturing processes is considered a channel; the number of channels therefore equals the number of machine tools used in the production process, and the channel capacity differs for each machine tool (Chryssolouris, Vassiliou, and Mavrikios 2006).
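A short numerical check of the worst-case claim above, that Shannon entropy is maximised under equiprobability; the function name is illustrative:

```python
import numpy as np

def entropy_bits(p):
    """Shannon entropy H = -sum(p * log2 p) of a probability vector, in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # terms with p = 0 contribute nothing (0 * log 0 -> 0)
    return -np.sum(p * np.log2(p))

print(entropy_bits([0.5, 0.5]))          # fair coin: 1.0 bit (maximum for k = 2)
print(entropy_bits([0.9, 0.1]))          # biased coin: ~0.47 bits
print(entropy_bits([1.0]))               # certain outcome: 0.0 bits
print(entropy_bits(np.full(28, 1/28)))   # 28 equiprobable operations: ~4.81 bits
```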
Understanding 15-year-old students’ conceptions of randomness through their ‘potential worlds’: a qualitative analysis
Published in International Journal of Mathematical Education in Science and Technology, 2021
Zisimos Braessas, Tasos Patronis
There are, epistemically speaking, four different interpretations of the general concept of 'randomness' that manifest themselves in teaching at school level (the first three have been formulated in Batanero, 2016):

Randomness as equiprobability. This view corresponds to the classical (or a priori) theory of probability. According to this approach, an object can be seen as 'randomly' selected from a set of objects it belongs to if all the objects of this set are equally likely to be selected. This approach presupposes that certain conditions are fulfilled, for example that the number of objects is finite and that there is some sort of 'symmetry' ensuring equiprobability.

Randomness as stability of frequencies. Following the frequentist (or a posteriori) approach to probability, we consider an object a 'random' member of a class if we can select it by a method that gives each member of this class a given (constant) relative frequency in the long run.

Subjective view of randomness. According to the above two approaches, randomness is an objective property of the selection of an object. According to the subjectivist approach to probability, by contrast, randomness depends on subjective knowledge. Since Bayes' theorem tells us that an initial probability can be revised in the light of new information a person may possess, it follows that probability depends on that person's knowledge or information (de Finetti, 1974, quoted in Batanero, 2016; Kyburg, 1974).

Complexity approach. A mathematical approach to randomness for sequences, deep in its theoretical details, has been developed by Kolmogorov and his followers since 1965, connecting randomness with computational complexity: the more instructions we need to re-create a sequence, the more 'complex', 'irregular', or 'chaotic' the sequence is (Kolmogorov, 1965).
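A small simulation, under assumed sample sizes, contrasting the first two interpretations: the classical view assigns a die face probability 1/6 by symmetry (equiprobability), while the frequentist view recovers the same value as a stable long-run relative frequency:

```python
import numpy as np

rng = np.random.default_rng(1)
classical_p = 1 / 6  # a priori, by the symmetry of a fair die

for n in (10, 1_000, 100_000):
    rolls = rng.integers(1, 7, size=n)     # faces 1..6
    freq = np.mean(rolls == 6)             # relative frequency of rolling a six
    print(f"n={n:>6}: frequency={freq:.4f} (classical p={classical_p:.4f})")
```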