Literature review and proposed framework
Published in Juan Carlos Chacon-Hurtado, Optimisation of Dynamic Heterogeneous Rainfall Sensor Networks in the Context of Citizen Observatories, 2019
Another information-theoretic objective function used in the design of sensor networks is to minimise the amount of redundant information. Total correlation (Alfonso et al. 2010) is a measure of the redundant information shared by a set of variables (or sensors in a network), quantified as the difference between the sum of the individual entropies and the joint entropy (Equation 2.26). Consequently, the optimal sensor network is the one that minimises the total correlation (TC): $$TC = \sum_{i=1}^{n} H(X_i) - JH(X_1, \ldots, X_n)$$
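As a minimal sketch, this quantity can be estimated from discrete samples by plugging empirical frequencies into the marginal and joint entropies; the function and variable names below are illustrative, not from the cited work:

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Empirical Shannon entropy (bits) of a sequence of hashable observations."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

def total_correlation(columns):
    """TC = sum of marginal entropies minus the joint entropy.

    `columns` is a list of equal-length sequences, one per sensor/variable;
    the joint entropy is taken over the tuples of simultaneous readings.
    """
    marginal = sum(entropy(col) for col in columns)
    joint = entropy(list(zip(*columns)))
    return marginal - joint

# Two identical sensors: all information is redundant, so TC = H(X) = 1 bit
x = [0, 0, 1, 1]
print(total_correlation([x, x]))  # 1.0
```

Two independent sensors would instead give a TC near zero, which is why minimising TC favours networks whose sensors carry non-overlapping information.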
Developing a transinformation based multi-criteria optimization framework for rainfall monitoring networks
Published in Chongfu Huang, Zoe Nivolianitou, Risk Analysis Based on Data and Crisis Response Beyond Knowledge, 2019
Heshu Li, Dong Wang*, Yuankun Wang
In multivariate cases, total correlation is commonly used to measure the shared (overlapping) information. Total correlation is defined as the difference between the sum of the marginal entropies and the joint entropy (McGill, 1954; Watanabe, 1960): $$C_{X_1, X_2, \ldots, X_d} = \sum_{i=1}^{d} H(X_i) - H(X_1, X_2, \ldots, X_d)$$
Optimized Deep Ensemble Technique for Malicious Behavior Classification in Cloud
Published in Cybernetics and Systems, 2023
V. Murali Mohan, Sukhvinder Singh, Pramod Pandurang Jadhav
Holoentropy-based features: In general, holoentropy is defined as the "sum of the entropy and the total correlation of the random vector." Its main purpose is to handle disordered data based on the correlation among the attributes. To improve the extraction of information from the given data, a new modified holoentropy-based feature extraction is introduced. The proposed modified holoentropy-based features compute a weight factor instead of using the existing default weight measure. This enhances the usability of the training data and reduces the computational burden of big-data classification. Accordingly, the holoentropy for each feature is computed using the expression in Eq. (1).
Weighted holoentropy-based features with optimised deep belief network for automatic sentiment analysis: reviewing product tweets
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2023
Hema Krishnan, M. Sudheep Elayidom, T. Santhanakrishnan
Holoentropy features are extracted during the feature extraction process. Holoentropy (Wu & Wang, 2013) is defined as 'the sum of the entropy and the total correlation of the random vector and can be expressed by the sum of the entropies on all attributes'. It is exploited to determine the level of chaos or disorder in a dataset, or to portray the ambiguity of a random variable. The mathematical formula of holoentropy is specified in Equation (17), whose terms denote the random variable, the correlation function, and the entropy, respectively.
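Since total correlation equals the sum of the marginal entropies minus the joint entropy, holoentropy (joint entropy + total correlation) reduces to the sum of the entropies over all attributes, as the quoted definition states. A minimal sketch under that definition, with illustrative names not taken from the cited work:

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Empirical Shannon entropy (bits) of a sequence of hashable observations."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

def holoentropy(columns):
    """HL(X) = H(X) + C(X); since C(X) = sum_i H(X_i) - H(X),
    this is simply the sum of the per-attribute entropies."""
    return sum(entropy(col) for col in columns)

# Two binary attributes, each with entropy 1 bit -> holoentropy = 2 bits
print(holoentropy([[0, 0, 1, 1], [0, 1, 0, 1]]))  # 2.0
```

A higher value indicates more disorder across the attributes, which is what the feature uses to portray the ambiguity of a random variable.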
Blockchain Mechanism-Based Attack Detection in IoT with Hybrid Classification and Proposed Feature Selection
Published in Cybernetics and Systems, 2023
The "holo-entropy is defined as the sum of the entropy and the total correlation of the random vector and can be expressed by the sum of the entropies on all attributes." Subsequently, it is modeled as in Eqs. (9)–(11), where one term is "the sum of the weighted entropy on each attribute of the random vector", modeled as in Eq. (10). The improved holo-entropy model is given in Eqs. (12)–(15), in which one term is the Pearson correlation and another is a tuning factor computed using the ICMIC chaotic map. The ICMIC chaotic map is modeled as in Eq. (16), with its control parameter set to 2.
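The ICMIC (Iterative Chaotic Map with Infinite Collapse) map is commonly written as $x_{n+1} = \sin(a / x_n)$ with nonzero $x_n$; assuming that standard form and the control parameter of 2 stated above, a minimal sketch (names illustrative):

```python
from math import sin

def icmic(x0, a=2.0, steps=10):
    """Iterate the ICMIC chaotic map x_{n+1} = sin(a / x_n).

    `a` is the control parameter (set to 2 in the text above);
    `x0` must be nonzero. Returns the generated sequence, whose
    values always lie in [-1, 1] because each iterate is a sine.
    """
    x = x0
    seq = []
    for _ in range(steps):
        x = sin(a / x)
        seq.append(x)
    return seq

seq = icmic(0.7, a=2.0, steps=5)
print(seq)  # a bounded, aperiodic-looking sequence in [-1, 1]
```

In the scheme described, such a sequence would drive the tuning factor of Eqs. (12)–(15), injecting controlled variability into the weighting.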