Conclusion
Published in Arun Reddy Nelakurthi, Jingrui He, Social Media Analytics for User Behavior Modeling, 2020
Most existing research has focused on semi-supervised learning techniques based on transfer learning and domain adaptation, which assume that a relevant source domain with abundant data exists. Typically, either the labeled source examples or a parametric distribution over them is assumed to be known and is leveraged to improve classification performance on the target domain. With the growing popularity of machine learning for solving real-world problems, a wide range of machine learning tools have been employed to build statistical models for data prediction, forecasting, and analysis. These tools are often trained on large labeled datasets with extensive human and computational resources. To address this, we proposed AOT [Nelakurthi et al., 2018a], a generic framework for source-free domain adaptation, which adapts an off-the-shelf classifier to the target domain without access to the source-domain training data. In AOT, we explicitly address the two main challenges, label deficiency and distribution shift, by introducing two residual vectors into the optimization framework. Furthermore, we propose a variant of the iterative shrinkage approach to estimate the residual vectors, which converges quickly, and we correct the drift in the class distribution through gradient boosting. The empirical study demonstrates the effectiveness and efficiency of the AOT framework on real-world data sets for text and image classification.
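The iterative-shrinkage idea mentioned above can be illustrated with a minimal sketch: a fixed off-the-shelf linear classifier is corrected by a sparse residual vector estimated on a handful of labelled target examples via ISTA (gradient steps followed by soft-thresholding). The function names and the logistic-loss setup are illustrative assumptions; this is a generic sketch of residual-based adaptation, not the AOT algorithm itself.

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of the L1 norm (the "shrinkage" step of ISTA)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def adapt_classifier(w_source, X_tgt, y_tgt, lam=0.01, step=0.1, iters=200):
    """Adapt a fixed source classifier w_source to the target domain by
    learning a sparse residual vector delta with ISTA on the logistic loss.
    Illustrative sketch only, not the AOT algorithm itself."""
    delta = np.zeros_like(w_source)
    for _ in range(iters):
        w = w_source + delta
        p = 1.0 / (1.0 + np.exp(-X_tgt @ w))       # predicted probabilities
        grad = X_tgt.T @ (p - y_tgt) / len(y_tgt)  # logistic-loss gradient
        delta = soft_threshold(delta - step * grad, step * lam)
    return w_source + delta
```

Because the source weights stay frozen and only the residual is penalised, the adapted model stays close to the off-the-shelf classifier unless the target labels demand otherwise.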
Introduction to Deep Learning
Published in Lia Morra, Silvia Delsanto, Loredana Correale, Artificial Intelligence in Medical Imaging, 2019
This problem has been studied in several settings; the most common one assumes that unlabelled data from the target domain is available at training time (unsupervised domain adaptation), possibly supplemented by a small sample of labelled target data (supervised or semi-supervised domain adaptation). More recently, researchers have started to tackle the more difficult but equally compelling scenario in which the target domain is not available at training time at all, usually denoted as domain generalization [108]. In the medical domain, testing on the target domain is required for regulatory purposes, but for certain modalities testing all possible variations of acquisition parameters is largely unfeasible; robustness to visual domain shift is therefore an important goal for the research community.
Bearing Fault Diagnosis under Different Working Conditions Based on Generative Adversarial Networks
Published in Rui Yang, Maiying Zhong, Machine Learning-Based Fault Diagnosis for Industrial Engineering Systems, 2022
Domain adaptation is a typical feature-transfer approach in transfer learning. It addresses the setting in which the source and target domains share a common classification task but the data distributions of the two domains are inconsistent. The idea of domain adaptation is to apply knowledge from the source domain to the target classification task through features common to both domains. Source-domain data usually comes with rich labels, whereas target-domain data has limited or even no labels. At present, unsupervised domain adaptation for unlabeled target-domain data has become a key research issue in fault diagnosis and classification.
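A common way to quantify the distribution inconsistency between the two domains is the maximum mean discrepancy (MMD), which many domain-adaptation methods minimise so that source and target features become indistinguishable. The sketch below is a generic biased MMD estimator with an RBF kernel; it is illustrative and not specific to the method described in this chapter.

```python
import numpy as np

def rbf_mmd2(X_src, X_tgt, gamma=1.0):
    """Biased estimate of the squared maximum mean discrepancy (MMD)
    between source and target feature samples, using an RBF kernel.
    A small value indicates the two feature distributions are close."""
    def k(A, B):
        # pairwise squared distances, then the RBF kernel matrix
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return (k(X_src, X_src).mean()
            + k(X_tgt, X_tgt).mean()
            - 2.0 * k(X_src, X_tgt).mean())
```

In feature-transfer methods this quantity is typically added to the training loss, so the network is pushed to extract features whose source and target distributions align.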
RS-DeepSuperLearner: fusion of CNN ensemble for remote sensing scene classification
Published in Annals of GIS, 2023
Most works have dealt with supervised classification, where RS scenes are classified into predefined categories of a single dataset used for both training and testing the model. However, more and more research now focuses on reducing the labelling effort using techniques such as domain adaptation and few-shot learning. Domain adaptation aims to transfer training knowledge from a labelled source dataset to a target dataset that has a few labelled samples (semi-supervised) or is completely unlabelled (unsupervised). For example, in recent work, Lasloum et al. (2021) presented a multisource semi-supervised domain adaptation (SSDAN) approach. SSDAN uses a CNN model with a prototypical layer for classification and trains the model on the labelled source dataset, the few labelled target samples, and the remaining unlabelled target samples. The optimized loss function combines the standard cross-entropy loss with an entropy loss computed on the unlabelled data.
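The combined objective described above, cross-entropy on the labelled samples plus an entropy term on the unlabelled target samples, can be sketched as follows. The function names and the weighting parameter `lam` are illustrative assumptions, and SSDAN's exact formulation may differ.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the class dimension
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def ssda_loss(logits_lab, y_lab, logits_unl, lam=0.1):
    """Semi-supervised DA objective of the kind described above:
    cross-entropy on labelled (source + few target) samples plus a
    weighted entropy term on unlabelled target samples. Sketch only."""
    p_lab = softmax(logits_lab)
    ce = -np.log(p_lab[np.arange(len(y_lab)), y_lab] + 1e-12).mean()
    p_unl = softmax(logits_unl)
    ent = -(p_unl * np.log(p_unl + 1e-12)).sum(axis=1).mean()
    return ce + lam * ent
```

Minimising the entropy term pushes the model toward confident predictions on the unlabelled target samples, which is what drives the adaptation in this family of methods.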
GAN-based clustering solution generation and fusion of diffusion
Published in Systems Science & Control Engineering, 2022
Wenming Cao, Zhiwen Yu, Hau-San Wong
Recently, deep clustering approaches (Kamran et al., 2017; Wang et al., 2016; Xie et al., 2016; Yang et al., 2016) have been proposed, which can be divided into two stages: learning deep features with an auto-encoder or convolutional kernels, and applying a clustering algorithm to the learnt features (Caron et al., 2018; Chang et al., 2017; Ji et al., 2017; Jiang et al., 2017; Peng et al., 2016). A similar yet distinct research field is deep unsupervised domain adaptation (Ganin & Lempitsky, 2015; Long et al., 2015, 2016; Tzeng et al., 2017). The difference between deep clustering and deep unsupervised domain adaptation (Chen et al., 2018; Hu et al., 2018; Long et al., 2018; Volpi et al., 2018) lies in that the former focuses on a single task, while the latter uses knowledge from a labelled source task to handle an unlabelled target task, either through deep shared features or by aligning the marginal distributions (Carlucci et al., 2017; Long et al., 2017; Sener et al., 2016; Swami et al., 2017). Furthermore, deep unsupervised domain adaptation (Herath et al., 2017; Luo et al., 2017; Murez et al., 2018) is differentiated from multi-task clustering in that the former exploits the label information of the source task, while the latter has no label information available in any of the tasks.
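The two-stage structure described above, learning a representation and then clustering in it, can be sketched compactly. Here PCA stands in for the auto-encoder or convolutional feature learner and plain k-means (with farthest-point initialisation) for the clustering step; both substitutions are illustrative assumptions, not any cited method.

```python
import numpy as np

def two_stage_clustering(X, n_clusters, n_components=2, iters=50, seed=0):
    """Two-stage pipeline from the text: (1) learn a low-dimensional
    representation, (2) run a clustering algorithm on it. PCA is a
    stand-in for the deep feature learner; k-means for the clusterer."""
    rng = np.random.default_rng(seed)
    # stage 1: PCA projection (placeholder for deep feature learning)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_components].T
    # stage 2: k-means on the learnt features, farthest-point init
    idx = [int(rng.integers(len(Z)))]
    for _ in range(n_clusters - 1):
        d = np.min(((Z[:, None] - Z[idx][None]) ** 2).sum(-1), axis=1)
        idx.append(int(d.argmax()))
    centers = Z[idx].copy()
    for _ in range(iters):
        d = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for k in range(n_clusters):
            if (labels == k).any():
                centers[k] = Z[labels == k].mean(axis=0)
    return labels
```

In the deep variants the first stage is trained, and often the two stages are optimised jointly, but the division of labour is the same.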
A linear space adjustment by mapping data into an intermediate space and keeping low level data structures
Published in Journal of Experimental & Theoretical Artificial Intelligence, 2021
Weiqing Fu, Hamid Parvin, Mohammad Reza Mahmoudi, Bui Anh Tuan, Kim-Hung Pho
This paper introduces a linear space mapping for equalising the distribution of data points in the training-data space with the distribution of data points in the test-data space, in the situation where only a few labelled samples exist in the test-data space. It is also assumed that all data samples in the training-data space carry ground-truth class labels. The setting can therefore be viewed as a special case of semi-supervised domain adaptation, in which there is a large number of labelled training samples in the source domain but only a few training samples from the target domain. It can also be considered a special case of unsupervised domain adaptation, and the proposed method is closely related to transfer learning. Throughout the paper, the symbols are as follows.
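One well-known instance of such a distribution-equalising linear mapping is correlation alignment (CORAL), which whitens the source features and re-colours them with the target covariance so that the two spaces match up to second-order statistics. It is shown here only to illustrate the general idea of a linear alignment map, not as the paper's proposed method.

```python
import numpy as np

def coral_map(X_src, X_tgt, eps=1e-5):
    """Map source samples so their mean and covariance match the target
    samples (CORAL-style linear alignment). Illustrative sketch only."""
    def cov(X):
        Xc = X - X.mean(axis=0)
        return Xc.T @ Xc / (len(X) - 1) + eps * np.eye(X.shape[1])
    def mat_pow(C, p):
        # matrix power of a symmetric positive-definite matrix
        w, V = np.linalg.eigh(C)
        return (V * np.clip(w, eps, None) ** p) @ V.T
    # whiten with the source covariance, re-colour with the target one
    A = mat_pow(cov(X_src), -0.5) @ mat_pow(cov(X_tgt), 0.5)
    return (X_src - X_src.mean(axis=0)) @ A + X_tgt.mean(axis=0)
```

After the mapping, a classifier trained on the transformed source samples sees inputs whose first- and second-order statistics agree with the test-data space.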