Estimating Crystallographic Texture and Orientation Statistics
Published in Jeffrey P. Simmons, Lawrence F. Drummy, Charles A. Bouman, Marc De Graef, Statistical Methods for Materials Science, 2019
The information entropy has important theoretical connections to probability theory that are relevant to the discussion of texture estimation. In particular, the principle of maximum entropy states that, given some set of testable data about a system, the probability distribution that best represents the current state of the system is the one that maximizes the information entropy. Entropy maximization techniques have been used by Schaeben et al. to solve the pole figure inverse problem [905] and by Böhlke to eliminate “ghosting” in low-l_max harmonic texture approximations [111].
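As a concrete illustration of the principle (a generic numerical sketch, not the pole-figure solver of Schaeben et al. or Böhlke's harmonic method), the maximum-entropy distribution over a discrete set of states subject to a known mean can be found as follows; the six states and the target mean are assumptions chosen to echo Jaynes' classic loaded-die example:

    # Generic maximum-entropy sketch: find the discrete distribution over
    # six states that maximizes Shannon entropy subject to a prescribed mean.
    import numpy as np
    from scipy.optimize import minimize

    states = np.arange(1, 7)      # six states, e.g. die faces
    target_mean = 4.5             # the "testable information"

    def neg_entropy(p):
        p = np.clip(p, 1e-12, 1.0)        # guard against log(0)
        return float(np.sum(p * np.log(p)))

    constraints = [
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},             # normalization
        {"type": "eq", "fun": lambda p: p @ states - target_mean},  # mean constraint
    ]
    p0 = np.full(len(states), 1.0 / len(states))
    res = minimize(neg_entropy, p0, bounds=[(0.0, 1.0)] * len(states),
                   constraints=constraints, method="SLSQP")
    print(res.x)   # weights tilted toward the higher states

The optimizer recovers the exponential-family form p_i ∝ exp(λ·i) that the Lagrange-multiplier treatment of the constrained maximization predicts.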
Advanced Risk-Based Biodegradation Study Using Environmental Information System and the Holistic Macroengineering Approach
Published in Donald L. Wise, Debra J. Trantolo, Edward J. Cichon, Hilary I. Inyang, Ulrich Stottmeister, Remediation Engineering of Contaminated Soils, 2000
Stergios Dendrou, Basile Dendrou, Mehmet Tumay
At this stage we know only the mean and variance of the contaminant concentration probability distribution. The full probability distribution is derived by invoking the principle of maximum entropy: the concentration probability distribution is the one that maximizes the information entropy subject to the additional constraints imposed by the given information, i.e., the mean and variance (6,7).
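The result invoked here is a standard one: among all densities on the real line with a given mean and variance, the entropy maximizer is the Gaussian. A textbook derivation sketch (written in LaTeX; this is the generic calculation, not the authors' notation):

    \max_{f}\; -\int f(x)\,\ln f(x)\,dx
    \quad \text{subject to} \quad
    \int f\,dx = 1, \qquad
    \int x f\,dx = \mu, \qquad
    \int (x-\mu)^2 f\,dx = \sigma^2 .

    % Stationarity of the Lagrangian forces f to be the exponential of a
    % quadratic in x, which, once the constraints are matched, is the
    % normal density:
    f(x) = \exp\!\left(-1 - \lambda_0 - \lambda_1 x - \lambda_2 x^2\right)
         = \frac{1}{\sqrt{2\pi}\,\sigma}
           \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right).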
Signature Generation Algorithms for Polymorphic Worms
Published in Mohssen Mohammed, Al-Sakib Khan Pathan, Automatic Defense Against Zero-day Polymorphic Worms in Communication Networks, 2016
Mohssen Mohammed, Al-Sakib Khan Pathan
There is another general technique for estimating probability distributions from data: maximum entropy. The main principle in maximum entropy is that when nothing is known, the distribution should be as uniform as possible, that is, have maximal entropy. Labeled training data are used to derive a set of constraints for the model that characterize the class-specific expectations for the distribution.
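A minimal sketch of this idea, assuming multinomial logistic regression as the maximum-entropy model whose feature expectations match those derived from the labeled data (the toy features and labels below are invented for illustration, not the book's signature-generation pipeline):

    # Maximum-entropy classification sketch: logistic regression is the
    # maxent model whose feature expectations match the empirical,
    # class-specific expectations derived from labeled training data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.array([[1, 0], [1, 1], [0, 1], [0, 0]])  # toy byte-pattern features
    y = np.array([1, 1, 0, 0])                      # 1 = worm, 0 = benign

    clf = LogisticRegression().fit(X, y)
    print(clf.predict_proba([[1, 0]]))  # class probabilities for a new flow

Subject only to the constraints encoded in the training data, the fitted model leaves the class distribution as uniform as possible, which is exactly the maximum-entropy behavior described above.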
Personalized travel route recommendation using collaborative filtering based on GPS trajectories
Published in International Journal of Digital Earth, 2018
The entropy-based histogram thresholding method is proposed in this research to determine the time-interval threshold. As mentioned, the trajectory segmentation method based on sampling rate (i.e. time interval) splits the sampling rates into normal and anomalous classes so that the trajectory can be divided wherever the sampling rate is anomalous. The splitting criterion should indicate which threshold value best partitions the sampling rates into the two classes. The proposed entropy-based histogram thresholding method rests on the principle of maximum entropy (Kapur, Sahoo, and Wong 1985; Wong and Sahoo 1989). Entropy is a well-known measure of randomness in information theory, and maximum entropy is a general technique for estimating probability distributions from data: when nothing is known, the distribution should be as uniform as possible, that is, have maximum entropy (Nigam, Lafferty, and McCallum 1999). In this study, a histogram of the GPS sampling rates is built. When no other information about the normal and anomalous sampling rates is available, the principle of maximum entropy holds that the two classes are best separated when the distribution within each class is as uniform as possible. Specifically, the most reasonable partition into normal and anomalous sampling rates is the one that is most randomized in the absence of prior information, namely the threshold at which the sum of the entropies of the normal and anomalous sampling-rate distributions reaches its maximum.
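A compact sketch of the cited Kapur-style criterion, applied here to an invented histogram of sampling intervals (the variable names and toy data are illustrative, not reproduced from the paper):

    # Kapur-style maximum-entropy histogram thresholding, sketched for a
    # 1-D histogram of GPS sampling intervals.
    import numpy as np

    def max_entropy_threshold(hist):
        """Return the bin index splitting hist into two classes whose
        summed entropies are maximal (Kapur, Sahoo, and Wong 1985)."""
        p = hist / hist.sum()
        c = np.cumsum(p)
        best_t, best_h = 1, -np.inf
        for t in range(1, len(p)):
            w_lo, w_hi = c[t - 1], 1.0 - c[t - 1]
            if w_lo <= 0 or w_hi <= 0:
                continue
            q_lo = p[:t][p[:t] > 0] / w_lo    # normalized "normal" class
            q_hi = p[t:][p[t:] > 0] / w_hi    # normalized "anomalous" class
            h = -np.sum(q_lo * np.log(q_lo)) - np.sum(q_hi * np.log(q_hi))
            if h > best_h:
                best_t, best_h = t, h
        return best_t

    intervals = np.random.exponential(5.0, 1000)     # toy time intervals (s)
    hist, edges = np.histogram(intervals, bins=64)
    t = max_entropy_threshold(hist)
    print("time-interval threshold:", edges[t])

Every candidate split is scored by the summed entropy of the two normalized class distributions, and the threshold achieving the maximum is returned, matching the criterion stated in the excerpt.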
Modelling uncertainty in reliability growth planning for continuous-use systems utilising disparate source data
Published in Australian Journal of Multi-Disciplinary Engineering, 2019
Paul Nation, Mohammad Modarres
Maximum entropy describes the state of a statistical model that encodes the least information. If nothing is known about a management strategy or fix-effectiveness-factor distribution except that it belongs to a certain class (in this case each is modelled using a single beta distribution, and we consider that to be all that is known), then the distribution with the largest entropy should be chosen as the least-informative default. Typically, the principle of maximum entropy yields the model with the widest confidence bounds consistent with the constraints, and hence the one most likely to be representative of real-world outcomes in the presence of uncertainty.
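As a hedged illustration of selecting such a least-informative beta (the fixed mean of 0.7 and the parameter grid are assumptions for the example, not values from the paper):

    # Among beta distributions constrained to a fixed mean, pick the
    # least-informative one by maximizing differential entropy.
    import numpy as np
    from scipy.stats import beta

    mean = 0.7                                  # assumed known constraint
    alphas = np.linspace(0.1, 20.0, 400)        # search grid over shape a
    pairs = [(a, a * (1.0 - mean) / mean) for a in alphas]  # b fixes the mean
    best = max(pairs, key=lambda ab: beta.entropy(*ab))
    print("max-entropy beta parameters (a, b):", best)

With no mean constraint at all, the same search collapses to Beta(1, 1), the uniform distribution, which has maximal entropy on [0, 1].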
General power laws of the causalities in the causal Bayesian networks
Published in International Journal of General Systems, 2023
Boyuan Li, Xiaoyang Li, Zhaoxing Tian, Xia Lu, Rui Kang
In this work, EI is adopted to quantify the causal effects. Therefore, the causal power laws may emerge from systems operating under certain information rules. In this section, an explanation based on the principle of maximum entropy is proposed. The principle of maximum entropy states that the probability distribution that best represents the current state of knowledge about a system is the one with the largest entropy, which provides a concise and conservative solution for complex systems (Jaynes 1957a, 1957b).
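A minimal sketch of how EI pairs with the maximum-entropy idea for a two-node network X → Y: intervene on X with the maximum-entropy (uniform) distribution and measure the resulting mutual information between cause and effect (the transition matrix below is invented for illustration, not taken from the paper):

    # Effective information (EI) sketch for a causal mechanism X -> Y.
    import numpy as np

    P = np.array([[0.9, 0.1],     # P(Y | do(X=0))
                  [0.2, 0.8]])    # P(Y | do(X=1))

    def effective_information(P):
        px = np.full(P.shape[0], 1.0 / P.shape[0])  # uniform intervention
        py = px @ P                                 # marginal over effects
        joint = px[:, None] * P
        indep = px[:, None] * py[None, :]
        mask = joint > 0
        return np.sum(joint[mask] * np.log2(joint[mask] / indep[mask]))

    print("EI =", effective_information(P), "bits")

The uniform intervention is precisely the maximum-entropy distribution over causes, so EI measures how much causal structure survives under the most conservative assumption about the inputs.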