Learning Transferrable Representations for Unsupervised Domain Adaptation


Supervised learning with large-scale labelled datasets and deep layered models has caused a paradigm shift in diverse areas of learning and recognition. However, this approach still suffers from generalization issues in the presence of a domain shift between the training and the test data distributions. Unsupervised domain adaptation algorithms directly address this domain shift problem between a labelled source dataset and an unlabelled target dataset, and recent papers \cite{ganin15, tzeng14} have shown promising results by fine-tuning networks with domain adaptation loss functions that align the mismatched training and testing data distributions.

Nevertheless, these recent deep learning based domain adaptation approaches still suffer from issues such as high sensitivity to the gradient reversal hyperparameters \cite{ganin15} and overfitting during the fine-tuning stage. In this paper, we propose a unified deep learning framework in which the representation, the cross-domain transformation, and the target label inference are all jointly optimized in an end-to-end fashion for unsupervised domain adaptation. Our experiments show that the proposed method outperforms state-of-the-art algorithms by a large margin on both object recognition and digit classification benchmarks.
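To give a flavour of the target label inference component, the following is a minimal, hedged sketch of a transduction step: each unlabelled target point receives the label of its nearest labelled source point in a shared feature space. This is a deliberate simplification for illustration only (the full method jointly learns the representation and uses a more elaborate inference procedure); the function name and toy data below are hypothetical, not from the paper's code.

```python
import numpy as np

def infer_target_labels(source_feats, source_labels, target_feats):
    """Transduction step (simplified): assign each target point the label
    of its nearest source point in the shared feature space."""
    # Pairwise squared Euclidean distances, shape (n_target, n_source),
    # computed via broadcasting.
    d = ((target_feats[:, None, :] - source_feats[None, :, :]) ** 2).sum(-1)
    return source_labels[d.argmin(axis=1)]

# Toy example: two well-separated source clusters, and a slightly
# shifted target distribution (mimicking a small domain shift).
rng = np.random.default_rng(0)
src = np.vstack([rng.normal(0.0, 0.1, (10, 2)),
                 rng.normal(3.0, 0.1, (10, 2))])
src_y = np.array([0] * 10 + [1] * 10)
tgt = np.vstack([rng.normal(0.2, 0.1, (5, 2)),
                 rng.normal(3.2, 0.1, (5, 2))])
pred = infer_target_labels(src, src_y, tgt)
```

In the actual framework this inference step would alternate with updates to the feature extractor, so that the representation and the inferred target labels improve jointly rather than in a single fixed pass.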

Paper and Source Code

Learning Transferrable Representations for Unsupervised Domain Adaptation
Ozan Sener, Hyun Oh Song, Ashutosh Saxena, Silvio Savarese
In Neural Information Processing Systems (NIPS), 2016
[PDF] [Supp.PDF] [GitHub (Coming Soon)]


Technical Queries: Ozan Sener