Applications of Physics-Informed Scientific Machine Learning in Subsurface Science: A Survey
Published in Anuj Karpatne, Ramakrishnan Kannan, Vipin Kumar, Knowledge-Guided Machine Learning, 2023
Alexander Y. Sun, Hongkyu Yoon, Chung-Yan Shih, Zhi Zhong
Even though geologic properties are largely static, the boundary and forcing conditions of geosystems are dynamic. A significant challenge is adapting ML models trained for one set of conditions or a single site (single domain) to other conditions (multiple domains), with potentially different geometries and boundary/forcing conditions. This problem has been tackled under lifelong learning (see Section 5.2). In recent years, few-shot meta-learning algorithms (Finn, Abbeel, Levine, 2017; Sun et al., 2019b) have been developed to enable domain transferability. The goal of meta-learning is to train a model on a variety of learning tasks so that the trained model discovers the common structure among tasks (i.e., learning to learn), which is then used to solve new learning tasks using only a small number of training samples (Finn, Abbeel, Levine, 2017). Future GeoML research needs to adapt these developments to enhance transfer learning across geoscience domains.
A Look at Top 35 Problems in the Computer Science Field for the Next Decade
Published in Durgesh Kumar Mishra, Nilanjan Dey, Bharat Singh Deora, Amit Joshi, ICT for Competitive Strategies, 2020
Deepti Goyal, Amit Kumar Tyagi
The concept of Model-Agnostic Meta-Learning (MAML) was proposed by Finn et al. (Finn et al., 2017). MAML samples batches of tasks from a well-dispersed task distribution and trains a shared parameter initialisation over them; starting from this initialisation, the model can adapt to a new task with only a few training samples and gradient iterations. Cortes et al. (Cortes et al.) suggested the "AdaNet" method, which learns the network architecture jointly with the network weights, growing the structure adaptively. A candidate subnetwork's k-th layer is connected to the existing network's k-th and (k−1)-th layers. Wichrowska et al. (Wichrowska et al., 2017) put forth a learned gradient-descent optimiser that generalises to newly encountered tasks with reduced memory and computation requirements. They defined the optimiser with a hierarchical Recurrent Neural Network (RNN) architecture, and it outperformed Adam on the Modified National Institute of Standards and Technology (MNIST, a large database of handwritten digits) dataset.
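The inner/outer-loop structure of MAML can be sketched in a few lines. The sketch below is a minimal illustration, not Finn et al.'s experimental setup: it assumes a toy family of 1-D linear-regression tasks, a single inner adaptation step, and the first-order approximation of the meta-gradient; the task distribution and learning rates are invented for the example.

```python
# Minimal first-order MAML sketch on a toy family of 1-D linear-regression
# tasks y = a * x (the task family and hyperparameters are illustrative).
import numpy as np

rng = np.random.default_rng(0)

def task_batch(n_tasks=8, n_points=10):
    """Sample a batch of tasks; each task has its own slope a."""
    tasks = []
    for _ in range(n_tasks):
        a = rng.uniform(-2.0, 2.0)
        x = rng.uniform(-1.0, 1.0, n_points)
        tasks.append((x, a * x))
    return tasks

def loss_grad(w, x, y):
    # Squared-error loss for the model y_hat = w * x, and its gradient in w.
    err = w * x - y
    return np.mean(err ** 2), 2.0 * np.mean(err * x)

w_meta = 0.0             # meta-parameter: the initialisation being learned
alpha, beta = 0.1, 0.01  # inner-loop / outer-loop learning rates
for step in range(500):
    meta_grad = 0.0
    for x, y in task_batch():
        _, g = loss_grad(w_meta, x, y)
        w_task = w_meta - alpha * g           # inner loop: one adaptation step
        _, g_post = loss_grad(w_task, x, y)   # first-order meta-gradient
        meta_grad += g_post
    w_meta -= beta * meta_grad                # outer loop: update initialisation
```

At deployment, a new task is solved by taking the same one-step inner update from `w_meta` on the new task's few samples; the outer loop has tuned the initialisation so that this single step is effective across the task distribution.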
Transformation-based Feature Engineering in Supervised Learning: Strategies toward Automation
Published in Guozhu Dong, Huan Liu, Feature Engineering for Machine Learning and Data Analytics, 2018
It relies on a reward signal for each action taken in order to guide the agent's behavior, which differs from the supervised learning paradigm, where we are given the ground truth to train the system. Here, we specifically discuss a particular instance of Q-learning with a function approximation method as discussed by Khurana et al. [13]. Q-value learning is a particular kind of reinforcement learning that is useful in the absence of an explicit model of the environment, which in this case translates to the behavior of the learning algorithm. A function approximator is suitable due to the large number of states (recall, millions of nodes in a graph with small depth), for which it is infeasible to learn state-action transitions explicitly. This style of work, where machine learning is aided by the use of other machine learning techniques, is referred to as "learning to learn" or meta-learning.
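Q-learning with function approximation can be sketched compactly. The sketch below is not Khurana et al.'s feature-engineering setting: it assumes a toy deterministic chain environment and one-hot (state, action) features as the simplest linear approximator basis; every environment detail here is invented for illustration.

```python
# Q-learning with a linear function approximator on a toy chain environment
# (states 0..N; reaching state N yields reward 1). Illustrative only.
import numpy as np

N = 5
ACTIONS = (-1, +1)            # move left or right along the chain
gamma, lr, eps = 0.9, 0.1, 0.3
rng = np.random.default_rng(1)

def features(s, a):
    # One-hot encoding of the (state, action) pair.
    phi = np.zeros((N + 1) * len(ACTIONS))
    phi[s * len(ACTIONS) + (0 if a == -1 else 1)] = 1.0
    return phi

w = np.zeros((N + 1) * len(ACTIONS))
q = lambda s, a: w @ features(s, a)   # approximate Q-value

for episode in range(300):
    s = 0
    while s != N:
        # Epsilon-greedy action selection.
        if rng.random() < eps:
            a = ACTIONS[rng.integers(2)]
        else:
            a = max(ACTIONS, key=lambda act: q(s, act))
        s2 = min(max(s + a, 0), N)
        r = 1.0 if s2 == N else 0.0
        # TD target uses the max over next-state actions (off-policy Q-learning).
        target = r if s2 == N else r + gamma * max(q(s2, b) for b in ACTIONS)
        w += lr * (target - q(s, a)) * features(s, a)
        s = s2
```

With one-hot features this reduces to tabular Q-learning; replacing `features` with a richer basis (or a neural network) gives the generalisation across unseen states that makes the approach feasible for very large state spaces.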
Current and future role of data fusion and machine learning in infrastructure health monitoring
Published in Structure and Infrastructure Engineering, 2023
Hao Wang, Giorgio Barone, Alister Smith
Ensemble learning algorithms allow the fusion of predictions from multiple weak learners to improve performance. Three categories of algorithms can be used for ensemble learning: bagging, boosting and stacking. Bagging algorithms consist of homogeneous learners that have been trained independently and in parallel; the final decision is then made through the combination of individual decisions via a deterministic averaging process. Boosting algorithms train the homogeneous learners sequentially, with each learner trained on the basis of the previous one. Unlike bagging and boosting algorithms, stacking algorithms train heterogeneous learners in parallel. The combination of weak decisions is achieved by training a meta-model that makes a prediction based on all the individual decisions. In other words, meta-learning algorithms learn how to best use the predictions from other ML algorithms to provide improved decisions.
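The stacking scheme described above can be sketched with two deliberately weak, heterogeneous base learners and a least-squares meta-model; the base learners, data, and split are illustrative assumptions, not the paper's setup.

```python
# Stacking sketch: two heterogeneous weak regressors combined by a
# least-squares meta-model trained on their held-out predictions.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, 300)
y = np.sin(3 * x) + 0.1 * rng.normal(size=300)

# Split the data: one half trains the base learners, the other the meta-model.
x_base, y_base = x[:150], y[:150]
x_meta, y_meta = x[150:], y[150:]

# Weak learner 1: a global linear fit.
lin = np.polyfit(x_base, y_base, 1)
def pred_lin(x):
    return np.polyval(lin, x)

# Weak learner 2: a piecewise-constant fit over 10 equal-width bins.
bins = np.linspace(-1.0, 1.0, 11)
bin_means = np.array([y_base[(x_base >= lo) & (x_base < hi)].mean()
                      for lo, hi in zip(bins[:-1], bins[1:])])
def pred_bin(x):
    return bin_means[np.clip(np.digitize(x, bins) - 1, 0, 9)]

# Meta-model: least-squares weights over the base predictions (plus intercept),
# fitted on data the base learners never saw.
P = np.column_stack([pred_lin(x_meta), pred_bin(x_meta), np.ones_like(x_meta)])
w_stack, *_ = np.linalg.lstsq(P, y_meta, rcond=None)

def pred_stack(x):
    return np.column_stack([pred_lin(x), pred_bin(x), np.ones_like(x)]) @ w_stack
```

Fitting the meta-model on a held-out split (rather than the base learners' own training data) is the standard precaution against the meta-model simply rewarding whichever base learner overfits most.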
Meta-modeling of heterogeneous data streams: A dual-network approach for online personalized fault prognostics of equipment
Published in IISE Transactions, 2022
Meta-learning is a machine learning mechanism that aims to teach a software agent to rapidly adapt to different learning tasks using only a few update iterations. In recent years, meta-learning has made great progress in image, audio, and text recognition (Finn et al., 2017). It is expected to solve multi-task learning problems in which each task lies in a different feature space and no single model can characterize all the learning tasks. In the literature, meta-learning approaches can be divided into two categories. The first category is the memory-based approach, which learns to store the experiences gained when learning an existing task and generalize them to a new task. For example, Santoro et al. (2016) presented a memory-augmented network to rapidly assimilate new data and leverage this data to make accurate predictions after only a few update iterations. Mishra et al. (2018) proposed a simple and generic meta-learner that can aggregate information from past experience. A handwritten-images dataset was used to test the performance of the proposed approach.
Few-Shot traffic prediction based on transferring prior knowledge from local network
Published in Transportmetrica B: Transport Dynamics, 2023
Lin Yu, Fangce Guo, Aruna Sivakumar, Sisi Jian
In the field of machine learning, the mechanism of learning to learn, i.e. meta-learning, has become a state-of-the-art technique for solving few-shot/one-shot learning problems. Finn, Abbeel, and Levine (2017) proposed Model-Agnostic Meta-Learning (MAML), a model training method for fast adaptation of deep learning networks. This algorithm was shown to perform well on small amounts of new data for regression, classification and reinforcement learning. Yao et al. (2019) used the MAML algorithm in a hybrid model combining CNN and LSTM, and applied it to two problems: taxi and bike demand prediction and water quality prediction. Their case studies showed the effectiveness of the proposed model in making predictions for data-scarce target cities by transferring knowledge from other cities.