Mining Ubiquitous Data Streams for IoT
Published in Ricardo Armentano, Robin Singh Bhadoria, Parag Chatterjee, Ganesh Chandra Deka, The Internet of Things, 2017
Nitesh Funde, Meera Dhabu, Ayesha Khan, Rahul Jichkar
The use of learning methods with frequent pattern mining (FPM)-based algorithms is gaining popularity in adaptive systems. For example, incremental learning methods are used in the first stage to adapt continuously to concept drift. The second stage then discovers context correlations using adaptive Apriori algorithms (Kishore Ramakrishnan et al., 2014). It is important to note that the combination of FPM algorithms with different learning methods has been investigated rigorously in resource-rich environments, but their usefulness in resource-constrained environments (RCEs) is still an unexplored research area. Therefore, a detailed study of the performance of FPM-based algorithms for finding frequent patterns in RCEs is needed.
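As a rough illustration of the second stage, the following minimal Python sketch mines frequent patterns over a sliding window of transactions. It uses brute-force itemset enumeration rather than the adaptive Apriori candidate pruning of Kishore Ramakrishnan et al. (2014), and the window size, support threshold, and sensor-event stream are illustrative assumptions.

```python
from collections import Counter, deque
from itertools import combinations

def frequent_patterns(window, min_support, max_size=3):
    """Count itemsets of size 1..max_size across the transactions currently in
    the window and return those meeting the minimum support threshold."""
    counts = Counter()
    for transaction in window:
        items = sorted(set(transaction))
        for k in range(1, max_size + 1):
            for itemset in combinations(items, k):
                counts[itemset] += 1
    threshold = min_support * len(window)
    return {itemset: c for itemset, c in counts.items() if c >= threshold}

# Sliding window over an illustrative sensor-event stream: old transactions
# drop out as new ones arrive, so the mined patterns can follow concept drift.
window = deque(maxlen=100)
stream = [["temp_high", "door_open"],
          ["temp_high", "fan_on"],
          ["temp_high", "door_open", "fan_on"]]
for transaction in stream:
    window.append(transaction)

print(frequent_patterns(window, min_support=0.5))
```

Bounding the window (here, 100 transactions) is one simple way to keep both memory use and re-mining cost fixed, which is the kind of constraint an RCE imposes.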
Conclusion
Published in Arun Reddy Nelakurthi, Jingrui He, Social Media Analytics for User Behavior Modeling, 2020
Arun Reddy Nelakurthi, Jingrui He
As discussed earlier in the introduction (Chapter 1), social media data is dynamic in nature. Content discussed on the various healthcare-specific social media forums changes from time to time, and the topics of discussion evolve over time, a phenomenon called concept drift [Sun et al., 2018]. Concept drift is common in medical informatics, financial data analysis, and social networks. In the existing research, incremental learning, which updates learning machines (models) when a chunk of new training data arrives, is a major learning paradigm for tackling such tasks. In particular, the learning machines should be updated without access to previous data, so that there is no need to store the previous data or relearn the model from it. Most research on addressing concept drift falls into three categories: (1) using a sliding-window technique to train the models and give importance to recent data, (2) modeling concept drift by considering data chunks at various time intervals, and (3) creating an ensemble of models from consecutive time stamps and building a predictive model as a function of the ensemble members. Ensemble models have been shown to handle concept drift better [Xie et al., 2017]. Our work on adapting off-the-shelf classifiers does not directly address the temporal dynamics involved in social media data. Given data chunks D1, …, Dt collected over t sequential time steps, existing work on ensemble-based methods is useful for learning t sub-tasks, each of which can be regarded as an adapted model (base learner). These base learners at various time stamps can then be leveraged to address concept drift.
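The sketch below illustrates the third category in a minimal form: one base learner per data chunk, combined by an accuracy-weighted vote. It is a generic chunk-ensemble pattern under stated assumptions (logistic regression base learners, every chunk containing all classes), not the specific method of Xie et al. [2017] or the adapted models of this book.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class ChunkEnsemble:
    """One base learner per data chunk, combined by a weighted vote.

    Members fitted on older chunks are re-weighted by their accuracy on the
    newest chunk, so recent concepts dominate the prediction. Assumes every
    chunk contains examples of all classes (illustrative simplification)."""

    def __init__(self, max_members=5):
        self.members = []
        self.weights = []
        self.max_members = max_members

    def update(self, X_chunk, y_chunk):
        # Re-weight existing members by how well they explain the new chunk.
        self.weights = [m.score(X_chunk, y_chunk) for m in self.members]
        # Fit a new base learner (adapted model) on the latest chunk.
        self.members.append(LogisticRegression(max_iter=1000).fit(X_chunk, y_chunk))
        self.weights.append(1.0)
        # Keep only the most recent members to bound memory.
        self.members = self.members[-self.max_members:]
        self.weights = self.weights[-self.max_members:]

    def predict(self, X):
        # Weighted sum of class probabilities across the retained base learners.
        votes = sum(w * m.predict_proba(X) for w, m in zip(self.weights, self.members))
        return self.members[0].classes_[np.argmax(votes, axis=1)]
```

Each call to update() corresponds to one chunk Dt; predict() then combines the most recent base learners, mirroring the ensemble-of-time-stamps idea described above.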
Big Data and IoT Forensics
Published in Vijayalakshmi Saravanan, Alagan Anpalagan, T. Poongodi, Firoz Khan, Securing IoT and Big Data, 2020
Because data in IoT arrives very fast and changes over time, incremental and ensemble learning methods can be applied to find the useful patterns present in the data. The big stream data collected from IoT devices needs dynamic learning because it faces the problem of concept drift [47]. Concept drift arises because classification boundaries and cluster centres keep changing as the stream evolves. This problem can be overcome by using incremental as well as ensemble learning techniques, which are able to deal with issues such as limited storage and data availability in IoT technology. With incremental learning, classification or forecasting can be updated quickly as new data is received. Incremental learning is in fact inherent in many traditional machine learning algorithms, including neural networks, decision tree-based classifiers, Gaussian and radial basis function neural networks, and incremental support vector machines [48]. Ensemble algorithms are better suited to handling concept drift because of their flexible nature: nearly every classification algorithm can be used in ensemble learning, whereas not every classification algorithm can be used in incremental learning [47]. As a result, incremental algorithms are recommended when concept drift is gradual or absent, and when the data stream is relatively simple but requires intensive real-time processing. Ensemble approaches, on the other hand, give accurate results when concept drift is large or abrupt, and should be used when the data stream has a complex or unfamiliar distribution.
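As a hedged illustration of the incremental route, the sketch below updates a linear classifier one mini-batch at a time using scikit-learn's partial_fit. The synthetic stream, batch size, and labelling rule are assumptions standing in for IoT sensor data; each batch is discarded after the update, matching the limited-storage setting described above.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()                  # linear model that supports incremental updates
classes = np.array([0, 1])               # all class labels must be declared up front

rng = np.random.default_rng(0)
for _ in range(50):                      # 50 mini-batches standing in for an IoT stream
    X_batch = rng.normal(size=(32, 4))                     # synthetic sensor readings
    y_batch = (X_batch[:, 0] > 0).astype(int)              # illustrative labelling rule
    model.partial_fit(X_batch, y_batch, classes=classes)   # update, then drop the batch

print(model.predict(rng.normal(size=(5, 4))))
```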
Current and future role of data fusion and machine learning in infrastructure health monitoring
Published in Structure and Infrastructure Engineering, 2023
Hao Wang, Giorgio Barone, Alister Smith
Future infrastructure health monitoring systems must harness artificial intelligence to automatically interpret measurements throughout an asset’s lifecycle and deliver timely knowledge to users so that safety-critical decisions can be made. Incremental learning offers a solution to this need: it can continuously extend existing knowledge when new training data becomes available gradually over time (i.e. providing life-long learning), while preserving the knowledge obtained from analyses of previous data. Incremental learning also removes the need to store large volumes of data over long periods of time, because the data is no longer needed once the new training is complete (e.g. selecting and storing the most representative new features, while removing the less representative existing features). In contrast to traditional ML approaches, in which model parameters are frozen after training, incremental learning allows model parameters to be continuously updated in response to new data, leading to improved performance and an ability to adapt and evolve. This type of technique is also suited to identifying trends, which can be particularly useful when monitoring structural performance over long periods of time.
A data-driven scheduling knowledge management method for smart shop floor
Published in International Journal of Computer Integrated Manufacturing, 2022
Yumin Ma, Shengyi Li, Fei Qiao, Xiaoyu Lu, Juan Liu
(2) Use incremental learning to update existing scheduling knowledge. Incremental learning refers to a learning system that can continuously learn new knowledge from new samples while retaining most of the previously learned knowledge (Zhu et al. 2018). Relative to re-learning from scratch, the cost of learning is significantly reduced.
Data-driven decision making: new opportunities for DSS in data stream contexts
Published in Journal of Decision Systems, 2022
Nuria Mollá, Ciara Heavin, Alejandro Rabasa
Incremental learning is a method in which the knowledge base is constantly updated with newly seen instances. These examples are used first to evaluate the model and then to train it, following the interleaved test-then-train evaluation scheme. Thus, the model is always tested on instances it has never seen before.
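The following minimal sketch shows one way to implement interleaved test-then-train (also called prequential) evaluation; the SGDClassifier and the synthetic stream are illustrative assumptions, not the decision-support pipeline discussed in the article.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])
correct = tested = 0

rng = np.random.default_rng(1)
for i in range(1000):                          # synthetic stream of single instances
    x = rng.normal(size=(1, 3))
    y = np.array([int(x[0, 0] > 0)])           # illustrative ground-truth rule
    if i > 0:                                  # test first (skip the untrained model)
        correct += int(model.predict(x)[0] == y[0])
        tested += 1
    model.partial_fit(x, y, classes=classes)   # then train on the same instance

print("prequential accuracy:", correct / tested)
```

Because every prediction is made before the instance is used for training, the running accuracy reflects performance on genuinely unseen data and will dip whenever the underlying concept drifts.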