Learning Techniques
Published in Peter Wlodarczak, Machine Learning and its Applications, 2019
Active learning is a form of online learning in which the agent acts to acquire useful examples from which to learn [31]. It is an iterative type of supervised learning suited to situations where data are abundant yet class labels are scarce or expensive to obtain [10]. It is a special case of learning in which the learner can actively query the user or another information source to obtain labels. The idea behind active learning is that the learning algorithm can perform better if it chooses the data it learns from itself. Since the learner chooses which examples are presented to the user for labeling, the number of labeled examples can be considerably lower than the number required for conventional supervised learning. Active learning thus helps identify informative unlabeled examples automatically [23].
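As a rough illustration of this query loop, the following is a minimal sketch rather than the authors' implementation: it assumes a pool of unlabeled examples, a scikit-learn classifier, and a hypothetical `query_label` function standing in for the human annotator, and it queries the least-confident sample at each step.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_loop(X_pool, query_label, n_seed=10, n_queries=50):
    """Pool-based active learning sketch using least-confidence queries.

    `query_label(i)` is a hypothetical stand-in for the human annotator:
    given a pool index, it returns the true label of that example.
    """
    rng = np.random.default_rng(0)
    labeled_idx = list(rng.choice(len(X_pool), size=n_seed, replace=False))
    y_labeled = [query_label(i) for i in labeled_idx]   # seed set (should cover >1 class)

    clf = LogisticRegression(max_iter=1000)
    for _ in range(n_queries):
        clf.fit(X_pool[labeled_idx], y_labeled)
        confidence = clf.predict_proba(X_pool).max(axis=1)
        confidence[labeled_idx] = np.inf                # never re-query labeled points
        i = int(np.argmin(confidence))                  # least-confident unlabeled sample
        labeled_idx.append(i)
        y_labeled.append(query_label(i))                # ask the oracle for its label
    return clf
```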
Multiscale Data Condensation
Published in Sankar K. Pal, Pabitra Mitra, Pattern Recognition Algorithms for Data Mining, 2004
The simplest approach to data reduction is to draw the desired number of random samples from the entire data set. Various statistical sampling methods, such as random sampling, stratified sampling, and peepholing [37], have long been available. However, naive sampling methods are not suitable for real-world problems with noisy data, since the performance of the algorithms may change unpredictably and significantly [37]. Better performance is obtained using uncertainty sampling [136] and active learning [241], where a simple classifier queries for informative examples. The random sampling approach effectively ignores all the information present in the samples not chosen for the reduced subset. An advanced condensation algorithm should incorporate information from all samples in the reduction process.
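For illustration only, a toy contrast between naive random sampling and a simple uncertainty-based selection (a stand-in for the cited uncertainty sampling, not the book's condensation algorithm) might look like this:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def condense(X, y, k, seed=0):
    """Return two index sets of size k: a naive random sample and an
    uncertainty-based selection made by a simple classifier."""
    rng = np.random.default_rng(seed)
    random_idx = rng.choice(len(X), size=k, replace=False)

    # Uncertainty-based condensation: fit a cheap classifier on the full data,
    # then keep the k samples it is least confident about (most informative).
    clf = GaussianNB().fit(X, y)
    confidence = clf.predict_proba(X).max(axis=1)
    uncertain_idx = np.argsort(confidence)[:k]
    return random_idx, uncertain_idx
```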
Model-based Design for GPs
Published in Robert B. Gramacy, Surrogates, 2020
Active learning is ML jargon for sequential design. Online data selection is sometimes viewed as an example of reinforcement learning, which is itself a special case of optimal control. Criteria for sequential data selection, and their solvers, are known as acquisition functions in the active learning literature. In many ways, machine learners have been more “active” in sequential design research of late than statisticians have, so many have adopted their nomenclature. Active learning is where the learning algorithm gets to actively acquire the examples it’s trained on, creating a virtuous cycle between information acquisition and inference.
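One of the simplest acquisition functions just maximizes the surrogate's predictive variance. The following minimal sketch uses scikit-learn's GaussianProcessRegressor (an assumption made here for concreteness; any surrogate with predictive uncertainty would do) and is not taken from the book's own code:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def next_design_point(gp, X_candidates):
    """Acquisition function: pick the candidate with the largest predictive
    standard deviation, i.e. where the surrogate is most uncertain."""
    _, sd = gp.predict(X_candidates, return_std=True)
    return X_candidates[np.argmax(sd)]

def sequential_design(f, X_init, y_init, X_candidates, n_steps=10):
    """Fit, acquire, evaluate the expensive function, refit -- repeated."""
    X, y = list(X_init), list(y_init)
    gp = GaussianProcessRegressor(normalize_y=True)
    for _ in range(n_steps):
        gp.fit(np.array(X), np.array(y))
        x_new = next_design_point(gp, X_candidates)
        X.append(x_new)
        y.append(f(x_new))            # run the (expensive) simulator at x_new
    return gp, np.array(X), np.array(y)
```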
Active learning for deep object detection by fully exploiting unlabeled data
Published in Connection Science, 2023
Active Learning: Active learning (Parvaneh et al., 2022; Yoo & Kweon, 2019; Yuan et al., 2021) aims to achieve the expected performance of the target model with as few labeled samples as possible, thereby significantly reducing the cost of labeling. Generally, active learning first selects or generates the most valuable samples through an appropriate strategy; experts then annotate these samples and add them to the training set. The core of active learning is the query strategy, and uncertainty sampling is one of the most common: the idea is to select the samples about which the model is most uncertain. It mainly includes three methods. The least confidence strategy uses the highest predicted class probability as the sample's informativeness score, ignoring the distribution of the remaining class scores. The margin sampling strategy measures informativeness by the difference between the predicted probabilities of the two most likely classes. The entropy strategy considers the probability distribution over all classes.
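Assuming the model returns a softmax probability vector per sample, the three measures can be written compactly as follows (a generic sketch, not the authors' implementation):

```python
import numpy as np

def least_confidence(probs):
    """Higher = more informative: 1 minus the top predicted class probability."""
    return 1.0 - probs.max(axis=1)

def margin(probs):
    """Smaller margin between the two highest class probabilities = more informative."""
    top2 = np.sort(probs, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]

def entropy(probs, eps=1e-12):
    """Shannon entropy over all class probabilities; higher = more informative."""
    return -np.sum(probs * np.log(probs + eps), axis=1)

# Example: query the sample with the highest entropy from a batch of predictions.
probs = np.array([[0.6, 0.3, 0.1], [0.34, 0.33, 0.33]])
query_idx = int(np.argmax(entropy(probs)))   # -> 1, the near-uniform prediction
```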
Bayesian Convolutional Neural Network-based Models for Diagnosis of Blood Cancer
Published in Applied Artificial Intelligence, 2022
Mohammad Ehtasham Billah, Farrukh Javed
In the future we aim to employ different DL architectures and databases with more samples to better assess the data. For small datasets, like the one used in this study, active learning can be implemented for better classification. In active learning, not all data points need to be labeled before training begins; rather, the algorithm starts with a very small labeled set and, during the learning process, the model itself asks the user (a human expert) to label specific data points as needed. With the Bayesian approach and active learning, one can classify the microscopic images of lymphocyte cells and obtain not only the prediction accuracies but also the predictive uncertainty, which is more reliable and helps avoid overfitting. Moreover, to further reduce the computational cost of the employed approach, the Bayesian CNN can be coupled with techniques such as transfer learning, factorization/decomposition of convolution kernels, and depthwise separable convolutions, together with more up-to-date visualization approaches such as t-SNE. In this article, we stick to the most commonly used performance assessment criteria, such as accuracy, specificity, and sensitivity, complemented with an overfitting analysis, but other measures, such as precision and the F1 score, can also be used to compare performance.
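One common way to approximate such predictive uncertainty is Monte Carlo dropout, which may differ from the exact Bayesian CNN formulation used in the article; a minimal PyTorch sketch under that assumption is:

```python
import torch

def mc_dropout_predict(model, x, n_samples=20):
    """Approximate the Bayesian predictive distribution with Monte Carlo dropout:
    keep dropout active at test time and average several stochastic passes."""
    model.eval()
    for m in model.modules():                       # re-enable only the dropout layers
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d)):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)      # predictive mean and uncertainty
```

The per-class standard deviation can then be used both as an uncertainty estimate for the diagnosis and as a score for deciding which unlabeled images to send to the expert for labeling.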
Leveraging wearable technologies to improve test & evaluation of human-agent teams
Published in Theoretical Issues in Ergonomics Science, 2020
Amar Marathe, Ralph Brewer, Bret Kellihan, Kristin E. Schaefer
Another learning-based approach is active learning. Active learning is an iterative semi-supervised learning technique where, at each iteration, an active learner (in this case, a machine learning algorithm such as a support vector machine) identifies maximally informative data points, queries an oracle for ground-truth labels for those points, and then incorporates the labelled data into the training model for the next iteration (Settles 2010). Active learning is an ideal approach when the system can detect a need to recalibrate the classifier and can seek out the information needed to adapt it. In a T&E system, the human may serve as the oracle, providing ground-truth information through subjective responses to targeted questionnaires designed to elicit information that improves classification accuracy. While promising in principle, this method has not been proven to work, and thus will be an important focus of our efforts as we develop the proposed system described later.
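A minimal sketch of one such recalibration round, assuming a scikit-learn SVM on a binary task and a hypothetical `ask_oracle` function standing in for the human questionnaire response, might look like this:

```python
import numpy as np
from sklearn.svm import SVC

def recalibrate(clf, X_labeled, y_labeled, X_unlabeled, ask_oracle, n_queries=5):
    """One recalibration round: query the oracle (here, the human) for the
    unlabeled points closest to the SVM decision boundary, then refit.
    Binary classification assumed; `ask_oracle` is a hypothetical helper."""
    clf.fit(X_labeled, y_labeled)
    dist = np.abs(clf.decision_function(X_unlabeled))   # distance to the margin
    query_idx = np.argsort(dist)[:n_queries]            # most ambiguous points
    X_new = X_unlabeled[query_idx]
    y_new = np.array([ask_oracle(x) for x in X_new])    # e.g. targeted questionnaire
    clf.fit(np.vstack([X_labeled, X_new]), np.concatenate([y_labeled, y_new]))
    return clf

# e.g. clf = recalibrate(SVC(kernel="rbf"), X_lab, y_lab, X_unlab, ask_oracle)
```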