Information Granularity, Information Granules, and Granular Computing
Published in Witold Pedrycz, Granular Computing, 2018
The term information granularity has emerged in different contexts and numerous areas of application, and it carries various meanings. In artificial intelligence, for instance, information granularity is central to problem solving through problem decomposition, where various subtasks are formed and solved individually. Information granules, and the area of intelligent computing revolving around them termed Granular Computing, are quite often presented in direct association with the pioneering studies by Zadeh (1997, 1999, 2005). Zadeh coined an informal yet highly descriptive notion of an information granule: in a general sense, an information granule is a collection of elements drawn together by their closeness (resemblance, proximity, functionality, etc.) articulated in terms of some useful spatial, temporal, or functional relationships. Granular Computing, in turn, is about representing, constructing, and processing information granules.
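As a minimal illustration of "elements drawn together by their closeness", the Python sketch below granulates a set of numeric readings by a proximity threshold; the function name, data, and threshold are hypothetical choices, not something prescribed by Pedrycz's text.

```python
def granulate_by_proximity(values, threshold):
    """Group sorted numeric values into information granules: a new
    granule starts whenever the gap to the previous value exceeds
    the proximity threshold."""
    granules = []
    for v in sorted(values):
        if granules and v - granules[-1][-1] <= threshold:
            granules[-1].append(v)   # close enough: join the current granule
        else:
            granules.append([v])     # too far away: open a new granule
    return granules

readings = [1.0, 1.2, 1.3, 5.0, 5.1, 9.8]
print(granulate_by_proximity(readings, threshold=0.5))
# [[1.0, 1.2, 1.3], [5.0, 5.1], [9.8]]
```

Each resulting list is a granule in exactly Zadeh's sense: its members are tied together by a spatial (proximity) relationship rather than by any predefined label.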
Introduction
Published in Sankar K. Pal, Pabitra Mitra, Pattern Recognition Algorithms for Data Mining, 2004
Granular computing (GrC) may be regarded as a unified framework for theories, methodologies and techniques that make use of granules (i.e., groups, classes or clusters of objects in a universe) in the process of problem solving. In many situations, when a problem involves incomplete, uncertain and vague information, it may be difficult to differentiate distinct elements and one is forced to consider granules. In other situations, even though detailed information is available, it may be sufficient to use granules in order to obtain an efficient and practical solution. Granulation is an important step in the human cognition process. From a more practical point of view, the simplicity derived from granular computing is useful for designing scalable data mining algorithms [138, 209, 219]. There are two aspects of granular computing: one deals with the formation, representation and interpretation of granules (the semantic aspect), while the other deals with the utilization of granules for problem solving (the algorithmic aspect). Several approaches to granular computing have been suggested in the literature, including fuzzy set theory [288], rough set theory [214], power algebras and interval analysis. The rough set theoretic approach is based on the principle of set approximation and provides an attractive framework for data mining and knowledge discovery.
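Since the excerpt singles out set approximation as the basis of the rough set approach, a short sketch may help. In rough set theory a target set is bracketed by a lower approximation (the union of indiscernibility classes wholly contained in it) and an upper approximation (the union of classes that merely intersect it). The decision table below is a hypothetical example, not one from the cited book.

```python
from collections import defaultdict

def indiscernibility_classes(table, attributes):
    """Partition objects into equivalence classes: objects with the
    same values on all chosen attributes are indiscernible."""
    classes = defaultdict(set)
    for obj, row in table.items():
        classes[tuple(row[a] for a in attributes)].add(obj)
    return list(classes.values())

def approximations(classes, target):
    """Lower: classes fully inside the target; upper: classes meeting it."""
    lower = {x for c in classes if c <= target for x in c}
    upper = {x for c in classes if c & target for x in c}
    return lower, upper

# Hypothetical patient table with two symptom attributes.
table = {
    "p1": {"fever": "high", "cough": "yes"},
    "p2": {"fever": "high", "cough": "yes"},
    "p3": {"fever": "low",  "cough": "yes"},
    "p4": {"fever": "low",  "cough": "no"},
}
classes = indiscernibility_classes(table, ["fever", "cough"])
print(approximations(classes, target={"p1", "p3"}))
# lower {'p3'}, upper {'p1', 'p2', 'p3'}
```

The gap between the two approximations ({p1, p2} here) is exactly the region where the available attributes are too coarse to decide membership, which is what makes the framework attractive for mining vague data.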
Big Data and IoT Forensics
Published in Vijayalakshmi Saravanan, Alagan Anpalagan, T. Poongodi, Firoz Khan, Securing IoT and Big Data, 2020
Granular computing-based machine learning [49] has recently become increasingly popular in big data domains such as cyber security [53]. It has shown favourable results in machine learning, data analysis, pattern discovery, and uncertain reasoning over the enormous amounts of data involved in forensic investigations. Granular computing [52] builds on basic constructs, granules, such as clusters, groups, classes, subsets, or intervals. It can therefore be used to design effective computational models for complex and big data applications, for example data mining, digital evidence analysis, remote sensing, and large databases of multimedia and biometrics. It provides useful and efficient tools to analyse data at multiple granularities and from multiple views in IoT forensics. In addition, various granular computing strategies can work efficiently with both crisp and fuzzy intelligent systems in dynamic environments. Granular computing enables analysts to handle complex problems concerning streaming data (for example, the evolution of new attributes or objects over time) by providing cost-effective solutions. Various concepts from uncertain decision theory, such as random sets, rough sets, and fuzzy sets, can be used in granular computing, and they apply at different phases of the big data value chain: at the initial stage they handle uncertainties present in the data, they then annotate the data, and finally they provide a granular representation of the data, which is fed to machine learning algorithms for decision making. Consequently, these techniques can tackle big data challenges either through pre-processing or by restating the problem at some granular level. Recently, several studies [50, 51] have developed granular-based classifiers.
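One way to picture the final step, a granular representation handed to a learning algorithm, is fuzzy granulation of a single numeric feature. The linguistic terms, membership shapes, and the packets-per-second feature below are illustrative assumptions, not details taken from the chapter.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership with peak at b over support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy granules for a packets-per-second feature in an IoT log.
GRANULES = {
    "low":    (0.0,    0.0,   50.0),
    "medium": (25.0,  75.0,  125.0),
    "high":   (100.0, 150.0, 150.0),
}

def granular_features(x):
    """Replace a raw value by its degree of membership in each granule;
    the resulting vector is what gets fed to a downstream classifier."""
    return {name: round(triangular(x, *abc), 3) for name, abc in GRANULES.items()}

print(granular_features(60.0))
# {'low': 0.0, 'medium': 0.7, 'high': 0.0}
```

Replacing exact readings by such membership vectors is one concrete way the fuzzy machinery handles uncertainty at the initial stage, before any classifier sees the data.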
Attribute reduction for set-valued data based on D–S evidence theory
Published in International Journal of General Systems, 2022
Granular computing was proposed by Zadeh (1996) and is an important tool in data mining, knowledge representation and artificial intelligence (Zadeh 1996, 1997). The aim is to find an approximation scheme that allows us to observe phenomena at different sizes, so as to effectively solve complex problems. Granule and granular structure are two important concepts in granular computing. An information granule is a collection of objects drawn together by some constraints, such as ambiguity, similarity or functionality. The construction process of information granules is called granulation. A granular structure is a set of information granules in which the inner structure of each information granule can be considered as a substructure. Granular computing mainly studies the structure, representation and interpretation of information granules. To date, there are four main approaches to granular computing: rough set theory (RST) (Pawlak 1982, 1991), fuzzy set theory (Zadeh 1965), concept lattice (Ma et al. 2007; Wu, Leung, and Mi 2009) and quotient space (Zhang and Zhang 2007).
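The idea of an approximation scheme that observes phenomena at different sizes can be made concrete with a toy granulation: coarsening or refining a partition by choosing fewer or more attributes. The universe and attributes below are hypothetical, not from the cited paper.

```python
from collections import defaultdict

def granulate(universe, attributes):
    """Build a granular structure: objects agreeing on the chosen
    attributes fall into the same information granule."""
    blocks = defaultdict(list)
    for name, row in universe.items():
        blocks[tuple(row[a] for a in attributes)].append(name)
    return sorted(blocks.values())

# Hypothetical universe of vehicles described by two attributes.
universe = {
    "v1": {"colour": "red",  "size": "small"},
    "v2": {"colour": "red",  "size": "large"},
    "v3": {"colour": "blue", "size": "small"},
    "v4": {"colour": "red",  "size": "small"},
}

# Fewer attributes give coarser granules; more attributes give finer ones.
print(granulate(universe, ["colour"]))          # [['v1', 'v2', 'v4'], ['v3']]
print(granulate(universe, ["colour", "size"]))  # [['v1', 'v4'], ['v2'], ['v3']]
```

Each print shows one granular structure; moving between such coarser and finer views is, loosely, the quotient-space idea of solving a problem at the coarsest level that still distinguishes what matters.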
Semi-Supervised Clustering Ensemble Based on Cluster Consensus Selection
Published in Cybernetics and Systems, 2022
Yanxi Liu, Ali Hussein Demin Al-Khafaji
Xu and Ding (2021) used granular computing to develop an ensemble clustering algorithm. Here, granular computing is employed to deal with overlap, ambiguity and uncertainty in the selection of clustering members. The authors calculate the similarity between members through a granularity distance to ensure the quality of the results of the individual clustering algorithms. The objective is to select ensemble members that maximize the granularity distance, which helps improve the accuracy of the final solution. In this method, a correlation matrix built on the notion of knowledge granularity is used as the consensus function.
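The excerpt does not give Xu and Ding's formulas, so the sketch below substitutes one widely used knowledge distance, the average size of the symmetric difference between the granules containing each object, together with a plain co-association consensus matrix; treat both as assumed stand-ins rather than the authors' exact definitions.

```python
import numpy as np

def granules(labels):
    """For each object index, the granule (set of indices) sharing its label."""
    groups = {}
    for i, lab in enumerate(labels):
        groups.setdefault(lab, set()).add(i)
    return [groups[lab] for lab in labels]

def granularity_distance(p, q):
    """Assumed knowledge distance between two partitions: average size of
    the symmetric difference of each object's granules, normalized by n."""
    n = len(p)
    gp, gq = granules(p), granules(q)
    return sum(len(gp[i] ^ gq[i]) for i in range(n)) / (n * n)

def coassociation(members):
    """Consensus matrix: fraction of selected members that place each
    pair of objects in the same cluster."""
    n = len(members[0])
    m = np.zeros((n, n))
    for labels in members:
        for i in range(n):
            for j in range(n):
                m[i, j] += labels[i] == labels[j]
    return m / len(members)

# Three hypothetical clusterings of four objects.
members = [[0, 0, 1, 1], [0, 1, 1, 1], [0, 0, 0, 1]]
print(granularity_distance(members[0], members[1]))  # 0.375
print(coassociation(members))                        # 4 x 4 consensus matrix
```

Selecting the subset of members with the largest pairwise granularity distances keeps the ensemble diverse; the co-association matrix then plays the role of the correlation-matrix consensus function described above.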