An approach to measure interaction in precipitation system under changing environment
Published in Chongfu Huang, Zoe Nivolianitou, Risk Analysis Based on Data and Crisis Response Beyond Knowledge, 2019
Wenqi Wang, Dong Wang, Yuankun Wang
Information theory was proposed and developed by Shannon (1948). For a random variable X with probability density function (PDF) f(x) defined on the interval [a, b], discretized into N class intervals with corresponding probabilities p_1, p_2, ..., p_N, the marginal entropy of X can be defined as:

$$H(X) = H(p_1, p_2, p_3, \ldots, p_N) = -\sum_{i=1}^{N} p_i \log p_i$$
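This binned form of the definition is straightforward to compute. The sketch below (not from the chapter; the function name, bin count, and the synthetic gamma-distributed series are illustrative assumptions) estimates H(X) by histogramming samples into N class intervals:

```python
import numpy as np

def marginal_entropy(samples, n_bins=10):
    """Discrete (binned) estimate of the marginal entropy H(X), in nats.

    Histogram the samples into n_bins class intervals, convert the
    counts to relative frequencies p_i, and return -sum(p_i log p_i).
    """
    counts, _ = np.histogram(samples, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # 0 * log 0 is taken as 0
    return -np.sum(p * np.log(p))

# Example: entropy of a synthetic, skewed precipitation-like series
rng = np.random.default_rng(42)
x = rng.gamma(shape=2.0, scale=10.0, size=1000)
print(f"H(X) = {marginal_entropy(x, n_bins=10):.3f} nats")
```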
Information Theory
Published in Jerry D. Gibson, Mobile Communications Handbook, 2017
Emmanuel Abbe, Bixio Rimoldi, Rüdiger Urbanke
The field of information theory has its origin in Claude Shannon's 1948 paper, “A Mathematical Theory of Communication.” Shannon's motivation was to study “[The problem] of reproducing at one point either exactly or approximately a message selected at another point.” In this chapter, we will be concerned only with Shannon's original problem. One should keep in mind that information theory is a growing field of research whose profound impact has reached various areas such as statistical physics, computer science, statistical inference, and probability theory. For an excellent treatment of information theory that extends beyond the area of communication, we recommend Cover and Thomas (1991). For the reader who is strictly interested in communication problems, we also recommend Gallager (1968), Blahut (1987), and McEliece (1977).
A Utility-Based Approach to Information Theory
Published in Craig Friedman, Sven Sandow, Utility-Based Learning from Data, 2016
Information theory provides powerful tools that have been successfully applied in a wide variety of fields, including statistical learning theory, physics, communication theory, probability theory, statistics, economics, finance, and computer science (see, for example, Cover and Thomas (1991)). As we have seen in Chapter 3, the fundamental quantities of information theory, such as entropy and Kullback-Leibler relative entropy, can be interpreted in terms of the expected wealth growth rate for a Kelly investor who operates in a complete market. Alternatively, as we shall see below, one can describe these information-theoretic quantities in terms of expected utilities for an investor with a logarithmic utility function.
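As a concrete illustration of this connection, the sketch below uses the standard horse-race result from Cover and Thomas (1991): for a Kelly (log-utility) investor facing odds implied by a market distribution q, betting the true distribution p is optimal, and the achieved growth rate equals the relative entropy D(p||q). The function names and example numbers are illustrative assumptions, not taken from the book:

```python
import numpy as np

def kelly_growth_rate(p, b, odds):
    """Expected log-wealth growth rate sum_i p_i * log(b_i * o_i)
    for an investor betting fraction b_i on outcome i at odds o_i."""
    p, b, odds = map(np.asarray, (p, b, odds))
    return float(np.sum(p * np.log(b * odds)))

def kl_divergence(p, q):
    """Kullback-Leibler relative entropy D(p || q), in nats."""
    p, q = np.asarray(p), np.asarray(q)
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.5, 0.3, 0.2])     # true outcome probabilities
q = np.array([0.4, 0.4, 0.2])     # market-implied probabilities
odds = 1.0 / q                    # "fair" odds quoted by the market

# Kelly betting (b = p) maximizes the growth rate, and the maximum
# equals D(p || q); the two printed values match (about 0.0253 nats).
print(kelly_growth_rate(p, p, odds))
print(kl_divergence(p, q))
```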
The preliminary selection of oil reservoir in Serbia for carbon dioxide injection and storage by a multicriteria decision-making approach: a case study
Published in Energy Sources, Part A: Recovery, Utilization, and Environmental Effects, 2021
Lola Tomić, Vesna Karović Maričić, Dušan Danilović
The weight coefficient w_i represents the importance of each criterion, and its determination is a necessary step for ranking alternatives. The sum of the weights of all criteria under consideration is equal to 1. Weight coefficients can be determined objectively or subjectively. In a subjective approach, the decision-maker assigns a weight to each criterion based on experience. To avoid subjectivity in the decision-making process, the entropy method is applied in this paper as an objective approach for determining the importance of criteria. In information theory, entropy is a measure of the degree of uncertainty in system information represented by a probability distribution (Li et al. 2011). To derive the weights by the entropy method, the normalized decision matrix is used. The entropy value measures the information content of the normalized decision matrix: the smaller the entropy value of a criterion, the greater its importance in the decision-making process, and vice versa.
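A minimal sketch of the entropy weight method in its commonly used form follows; the paper's exact normalization (for example, any treatment of cost-type criteria) may differ, and the example data are made up:

```python
import numpy as np

def entropy_weights(X):
    """Objective criteria weights via the entropy method.

    X : (m, n) decision matrix, m alternatives by n criteria,
        with nonnegative entries (benefit-type criteria assumed).
    """
    X = np.asarray(X, dtype=float)
    m, n = X.shape
    P = X / X.sum(axis=0)                     # normalize each criterion column
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)        # entropy of each criterion, in [0, 1]
    d = 1.0 - e                               # degree of diversification
    return d / d.sum()                        # weights sum to 1

# Example: 4 alternatives scored on 3 criteria
X = [[7, 0.3, 120],
     [5, 0.5,  90],
     [9, 0.2, 150],
     [6, 0.4, 110]]
w = entropy_weights(X)
print(w, w.sum())   # lower-entropy criteria receive larger weights
```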
Research on Coupling Mechanism of China’s Wind Power Industry Chain
Published in International Journal of Green Energy, 2020
With different levels of indicators included in the wind power industrial chain subsystem, the contribution weights to the order degree of the subsystem differ. In this paper, the entropy method is used to determine the index weights; it not only overcomes the randomness and conjecture that are unavoidable in subjective weighting methods, but also effectively resolves the problem of information overlap between multiple index variables. In information theory, entropy is a measure of information disorder, or uncertainty. When the entropy value of an evaluation object's subsystem is large, its disorder degree is high, its coupling utility value is small, and its contribution weight is correspondingly small. In this section, following the principle of the entropy value weighting method and based on the comprehensive effect of the sample mean, contribution weights are assigned to the subsystems.
Method towards discovering potential opportunity information during cross-organisational business processes using role identification analysis within complex social network
Published in Enterprise Information Systems, 2020
Wenan Tan, Lu Zhao, Lida Xu, Li Huang, Na Xie
Within the cross-organisational workflow enactment environment, there are many-to-many relations between roles and actors. Moreover, the information on roles can be referenced to select specific actors through the process definition. As stated in Section 4.1, the information on roles is considered to contain not only the information of the actors but also information about the roles themselves. Therefore, we utilise information entropy to measure the amount of information on roles. Information entropy was proposed by C. E. Shannon, the father of information theory (Shannon 1948). Drawing on the concept of entropy from thermodynamics, he defined 'information entropy' as the average amount of information remaining after redundancy is excluded.
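As a hypothetical illustration of such a measure (the paper's exact formulation may differ; the function `role_entropy` and the frequency-based distribution are assumptions), a role's entropy can be computed from the distribution of actors who enact it:

```python
import math
from collections import Counter

def role_entropy(assignments):
    """Shannon entropy of a role's actor-assignment distribution, in bits.

    assignments : list of actor ids that have enacted the role; their
    relative frequencies serve as the probability distribution.
    """
    counts = Counter(assignments)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A role enacted evenly by many actors carries more information (higher
# uncertainty about who performs it) than a role tied mostly to one actor.
print(role_entropy(["a1", "a2", "a3", "a4"]))   # 2.0 bits
print(role_entropy(["a1", "a1", "a1", "a2"]))   # about 0.811 bits
```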