Fundamentals of Applied Probability
Published in Ephraim Suhir, Human-in-the-Loop, 2018
The normal distribution is not only the most widely used distribution in practical problems; it is also a cornerstone of probability theory. This distribution arises when a random variable results from the summation of a large number of independent or weakly dependent random variables that are comparable in their effect on the scatter of the sum. In other words, the distribution of the sum of a number of random variables converges, under very general conditions, to the normal distribution as the number of variables in the sum becomes large. This remarkable property is known in probability theory as the central limit theorem: loosely stated, the sum of many small random effects is approximately normally distributed. The normal law also has the maximum entropy (uncertainty) among all probability distributions of continuous random variables for which only the expected (mean) value and the variance are known.
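As a quick illustration of the theorem (a minimal sketch assuming NumPy; the uniform summands and the sample sizes are arbitrary choices, not from the text), summing even a modest number of independent uniform variables produces an approximately normal sum with the predicted mean and variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sum N independent uniform(0, 1) variables, repeated over many trials.
# The CLT predicts the sums are roughly normal with mean N/2, variance N/12.
N, trials = 30, 100_000
sums = rng.uniform(0.0, 1.0, size=(trials, N)).sum(axis=1)

print(f"sample mean:     {sums.mean():.3f}   (CLT predicts {N / 2:.3f})")
print(f"sample variance: {sums.var():.3f}   (CLT predicts {N / 12:.3f})")
```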
Probability
Published in John Bird, Higher Engineering Mathematics, 2017
Bayes’ theorem is a theorem of probability theory (originally stated by the Reverend Thomas Bayes), and may be seen as a way of understanding how the probability that a theory is true is affected by a new piece of evidence. The theorem has been used in a wide variety of contexts, ranging from marine biology to the development of ‘Bayesian’ spam blockers for email systems; in science, it has been used to try to clarify the relationship between theory and evidence. Insights in the philosophy of science involving confirmation, falsification and other topics can be made more precise, and sometimes extended or corrected, by using Bayes’ theorem. Bayes’ theorem may be stated mathematically as

P(A|B) = P(B|A) P(A) / P(B)

where P(A) is the prior probability of the theory A, P(B) is the probability of the evidence B, P(B|A) is the probability of the evidence given that the theory is true, and P(A|B) is the posterior probability of the theory once the evidence has been taken into account.
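To make the spam-blocker application concrete, here is a minimal sketch of the update (the prior spam rate and word frequencies below are invented for illustration, not values from the text):

```python
# Hypothetical single-word spam filter: compute P(spam | word appears).
p_spam = 0.2        # prior P(A): assumed fraction of mail that is spam
p_word_spam = 0.6   # P(B|A): assumed chance the word appears in spam
p_word_ham = 0.05   # assumed chance the word appears in legitimate mail

# Total probability of the evidence, P(B), then Bayes' theorem.
p_word = p_word_spam * p_spam + p_word_ham * (1 - p_spam)
p_spam_given_word = p_word_spam * p_spam / p_word

print(f"P(spam | word) = {p_spam_given_word:.2f}")  # 0.75
```

Seeing the word raises the probability that the message is spam from the prior of 0.20 to a posterior of 0.75.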
Chapter 14 Image Processing and Analysis
Published in B H Brown, R H Smallwood, D C Barber, P V Lawford, D R Hose, Medical Physics and Biomedical Engineering, 2017
In probability theory, a consequence of the central limit theorem is that if a distribution function is repeatedly convolved with itself, the resulting function will become closer and closer to a normal (Gaussian) distribution. If the variance of the distribution function is σ², the variance of the function produced by convolving N copies of the function is Nσ². This result applies to repeated smoothing with a filter. Calculate the variance of the 421 filter assuming the pixel size is p mm, and hence calculate the FWHM of the equivalent filter produced by N smoothings with the 421 filter.
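A sketch of that calculation, assuming (as is conventional for this filter, though not spelled out in the excerpt) that the 421 filter has the one-dimensional profile (1, 2, 1)/4 with taps at offsets −p, 0, +p; the numerical value of p below is a placeholder:

```python
import numpy as np

p = 1.0                             # pixel size in mm (placeholder value)
kernel = np.array([1, 2, 1]) / 4.0  # assumed 1-D profile of the 421 filter

def filter_variance(weights, pitch):
    # Variance of a normalized, symmetric filter with taps spaced by pitch;
    # the mean is zero by symmetry, so the variance is sum(w * x**2).
    x = (np.arange(len(weights)) - (len(weights) - 1) / 2) * pitch
    return float(np.sum(weights * x**2))

print(filter_variance(kernel, p))   # p**2 / 2 for the (1, 2, 1)/4 profile

# Convolving N copies adds the variances, giving N * p**2 / 2.
N = 10
smoothed = kernel
for _ in range(N - 1):
    smoothed = np.convolve(smoothed, kernel)
print(filter_variance(smoothed, p), N * p**2 / 2)   # the two values agree

# FWHM of the equivalent (near-Gaussian) filter: 2 * sqrt(2 ln 2) * sigma,
# which works out to 2 * sqrt(N * ln 2) * p, i.e. about 1.67 * sqrt(N) * p mm.
sigma = np.sqrt(N * p**2 / 2)
print(2 * np.sqrt(2 * np.log(2)) * sigma)
```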
On lacunary statistical convergence of double sequences in credibility theory
Published in International Journal of General Systems, 2023
The world is neither random nor fuzzy, but it can sometimes be analyzed by probability theory and sometimes by credibility theory. As a branch of mathematics, probability theory studies the behavior of random phenomena. In the area of fuzzy set theory, much excellent work has been done, beginning with the work of L. A. Zadeh (1965). To measure a fuzzy event, L. A. Zadeh (1978) also defined the concept of a possibility measure as a counterpart of the probability measure. Although the possibility measure has been widely used, it lacks the self-duality property, and a self-dual measure is needed in both theory and practice. To define a self-dual measure, B. Liu and Liu (2002) introduced the concept of the credibility measure, and a necessary and sufficient condition for a credibility measure was presented by Li and Liu (2006). The credibility measure plays the role of the possibility measure in the fuzzy world because it shares some fundamental features with it. In particular, since B. Liu (2006) began the survey of credibility theory, a large number of related papers have appeared, including Chen, Ning, and Wang (2016), B. Das et al. (2020), B. Das et al. (2021), Y. K. Liu and Liu (2002), and Xia (2011).
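For reference (a standard identity in the credibility literature; the notation here is an assumption, following common usage): writing Pos for the possibility measure and Nec{A} = 1 − Pos{A^c} for its dual necessity measure, the credibility measure of B. Liu and Liu (2002) can be expressed as Cr{A} = (Pos{A} + Nec{A})/2, and self-duality means that Cr{A} + Cr{A^c} = 1 for every event A.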