Introduction to the Kinetic Theory of Gases
Published in Caroline Desgranges, Jerome Delhommelle, A Mole of Chemistry, 2020
Let us start from the beginning and the Bernoulli distribution (see Figure 1.14). For example, if someone asks a “yes–no” question, what is the probability of getting a “yes” answer? It is 50%, the same as getting a “no” answer, provided that there is no bias here. Now, if we look at a coin toss, we also have a 50% chance that the “heads” side turns up and a 50% chance for the “tails” side to show. Again, this is only true if there is no bias (the coin is not rigged). On the other hand, if the coin is rigged in favor of the “tails” side, the probability of drawing “tails” will be greater than that of drawing “heads”. Mathematically, this translates into having a probability p for “tails” such that p > 50% or p > 0.5. For instance, let us choose a coin for which p = 0.6. The Bernoulli distribution tells us that the probability q of drawing “heads” will be q = 1 – p, meaning that q = 0.4. In conclusion, with this rigged coin, we have a 60% chance for a “tails” outcome and a 40% chance for “heads”! In computer science, bits (0 or 1) are the equivalent of “yes–no” questions, laying the groundwork for machine-learning algorithms!
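The rigged-coin example above can be sketched in a few lines of Python. This is a minimal illustration, not code from the book: the function name `bernoulli_trial` and the choice of seed are our own, and a finite simulation only approaches p = 0.6 for large sample sizes.

```python
import random

def bernoulli_trial(p, rng=random.Random(42)):
    # Return 1 ("tails") with probability p, else 0 ("heads").
    # (Illustrative helper; a fixed seed keeps the run reproducible.)
    return 1 if rng.random() < p else 0

p = 0.6                      # rigged coin: P(tails) = 0.6
q = 1 - p                    # Bernoulli distribution: P(heads) = 1 - p
n = 100_000
tails = sum(bernoulli_trial(p) for _ in range(n))
print(q)                     # 0.4
print(tails / n)             # close to 0.6 for large n
```

Over many trials the observed fraction of “tails” settles near p, which is exactly the “60% chance for tails, 40% chance for heads” stated in the text.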
Probability and Statistics
Published in Quamrul H. Mazumder, Introduction to Engineering, 2018
Probability is the likelihood of something happening. In other words, it is the chance that an event will take place. For example, the probability of a coin toss resulting in either heads or tails is 1 (100%), because there are no other options, assuming that the coin lands flat.
P
Published in Splinter Robert, Illustrated Encyclopedia of Applied and Engineering Physics, 2017
[atomic, computational, nuclear, thermodynamics] Branch of mathematical analysis that studies the likelihood of the occurrence of an event. The probability is the number of ways a specific outcome can occur divided by the total number of possible events, and it ranges from 0 to 1, or respectively 0% to 100% likelihood, always relative to the other events in a probability distribution. A probability is often weighed against a null hypothesis, as proposed in 1931. The null hypothesis states that the event reflects the most likely phenomenon, at “zero change”; in contrast, the “alternative hypothesis” states that the event differs from it. The most well-known probability is that of a coin toss, “heads–tails,” with 50% probability for “heads,” provided a large enough number of tosses is performed. Probability is frequently used to test a hypothesis: when the observed occurrences yield a p-value below a chosen threshold (p < 0.05, p < 0.01, p < 0.005), the null hypothesis is rejected in favor of the alternative; the smaller the p-value, the stronger the evidence against the null hypothesis. The p-value is the probability of obtaining a test value with statistical magnitude at least as high as the one actually observed, under the assumption that the null hypothesis is true. In other words, the p-value indicates how likely the observation is if the null hypothesis holds; hence a large p-value means the observation is consistent with the null hypothesis, which then stands. Probability can be distributed over the possible outcomes. Several probability distributions are available from a mathematical foundation: the Poisson distribution, normal (Gaussian) distribution, multinormal distribution, uniform distribution, delta function, binomial distribution, Bernoulli distribution, gamma distribution, Student’s t-distribution, and several more.
Note that the “normal” distribution is not the most likely probability distribution for every phenomenon; the name does not imply causality. The choice of distribution depends on the sample size, the phenomenon, and the treatment of “outliers” (failed events) (also see Bayesian analysis) (see Figure P.163).
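As a sketch of how a p-value can be computed for the coin-toss example (this is our illustration, not a procedure from the encyclopedia entry), a one-sided binomial test under a fair-coin null hypothesis can be written directly from the binomial formula:

```python
from math import comb

def binom_pvalue(k, n, p=0.5):
    # One-sided p-value: P(X >= k) for X ~ Binomial(n, p),
    # i.e. the chance of at least k heads if the null (fair coin) is true.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Observing 60 heads in 100 tosses of a supposedly fair coin:
pval = binom_pvalue(60, 100)
print(round(pval, 4))   # ~0.0284, below 0.05, so the null is rejected
```

A result this extreme would occur only about 3% of the time under the null hypothesis, so at the p < 0.05 threshold the fair-coin hypothesis is rejected in favor of the alternative.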
Modelling and quantification of industry 4.0 manufacturing complexity based on information theory: a robotics case study
Published in International Journal of Production Research, 2019
Dimitris Mourtzis, Sophia Fotia, Nikoletta Boli, Ekaterini Vlachou
In the present study, it is assumed that the Digitalised Manufacturing System towards Industry 4.0 is probabilistic, since Shannon’s entropy is calculated only in the context of a probabilistic model. Moreover, the events are assumed to be equiprobable, since this is the worst-case scenario: H takes its maximum value when the probabilities inserted in the logarithm are equal. Equiprobability is the tendency to believe that every process in which randomness is involved, as in our case, corresponds to a fair distribution, with equal probabilities for any possible outcome (Bertsekas and Tsitsiklis 2002). The possible outcomes in the examined scenarios could be the various states of a piece of equipment. The worst-case equiprobability can be made more understandable with the following example: for a fair coin toss, H = −log2(0.5) = 1 bit, while if the toss is completely biased so that one of the two states is certain, H = −log2(1) = 0, meaning no uncertainty. To be more precise regarding the equiprobability assumption in the manufacturing system model, consider the communication system established by the Manufacturing and the Assembly Departments. With the transmitted information referring to the machining operations, the number of variables included in the source depends on the number of operations that can be performed by the machine tools (assumed to be 28). Thus, k = 28 (the number of operations that can be performed by the machine tools) and, since these are assumed to be equiprobable, the source entropy is given by H = log2 k = log2 28 ≈ 4.81 bits/message. In particular, considering that equiprobability enters the channel capacity, the more choices there are, the larger the entropy H and therefore the channel capacity will be. In the above-mentioned communication system, each machine tool used in the manufacturing processes is considered to be a channel.
Therefore, the number of channels equals the number of machine tools used in the production process and the channel capacity is different for each machine tool (Chryssolouris, Vassiliou, and Mavrikios 2006).
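The entropy values quoted in this excerpt can be checked with a short Shannon-entropy sketch (our illustration, assuming the paper's figure of k = 28 equiprobable machine-tool operations):

```python
from math import log2

def entropy(probs):
    # Shannon entropy H = -sum_i p_i * log2(p_i), in bits.
    # Terms with p = 0 contribute nothing (lim p*log p = 0).
    return -sum(p * log2(p) for p in probs if p > 0)

# Fair coin toss: maximum uncertainty for two states
print(entropy([0.5, 0.5]))              # 1.0 bit
# Fully biased toss (one state certain): no uncertainty
print(entropy([1.0]))
# k = 28 equiprobable operations, the assumption in the study
k = 28
print(round(entropy([1 / k] * k), 2))   # ~4.81 bits/message
```

For a uniform distribution this reduces to H = log2 k, which is why more equiprobable choices always mean larger entropy and hence, in the paper's framing, larger channel capacity.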