

Poster

Improving Unsupervised Domain Adaptation: A Pseudo-Candidate Set Approach

Aveen Dayal · Rishabh Lalla · Linga Reddy Cenkeramaddi · C. Krishna Mohan · Abhinav Kumar · Vineeth N Balasubramanian

Strong Double Blind: this paper was not made available on public preprint services during the review process.
Wed 2 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Unsupervised domain adaptation (UDA) is a critical challenge in machine learning, aiming to transfer knowledge from a labeled source domain to an unlabeled target domain. In this work, we aim to improve the target-domain accuracy of any existing UDA method by introducing an approach that utilizes pseudo-candidate sets for labeling the target data. These pseudo-candidate sets serve as a proxy for the true labels in the absence of direct supervision. To enhance accuracy on the target domain, we propose Unsupervised Domain Adaptation refinement using Pseudo-Candidate Sets (UDPCS), a method that effectively learns to disambiguate among the classes in the pseudo-candidate set. Our approach is characterized by two distinct loss functions: one that acts on the pseudo-candidate set to refine its predictions and another that operates on the labels outside the pseudo-candidate set. We use a threshold-based strategy to further guide the learning process toward accurate label disambiguation. We validate our novel yet simple approach through extensive experiments on three well-known benchmark datasets: Office-Home, VisDA, and DomainNet. Our experimental results demonstrate the efficacy of our method in achieving consistent gains in target accuracy across these datasets.
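The abstract does not spell out the loss formulations, so the following is only a minimal sketch of how the two terms and the confidence threshold described above could be wired together. Everything here is an assumption for illustration, not the authors' released UDPCS code: the function name `pseudo_candidate_losses`, the per-sample boolean `candidate_mask` (e.g. built from a source-trained model's top-k predictions), and the specific choice of log-based losses and gating are all hypothetical.

```python
# Illustrative sketch only (not the authors' implementation of UDPCS).
# Assumes a softmax classifier over C classes and a precomputed boolean
# pseudo-candidate mask for each target sample.
import torch
import torch.nn.functional as F


def pseudo_candidate_losses(logits, candidate_mask, threshold=0.9):
    """
    logits:         (B, C) unnormalized scores for target-domain samples.
    candidate_mask: (B, C) bool, True for classes in each sample's
                    pseudo-candidate set (assumed construction, e.g. the
                    top-k classes predicted by a source-trained model).
    threshold:      confidence threshold guiding when to sharpen predictions
                    (hypothetical value).
    """
    probs = F.softmax(logits, dim=1)
    cand = candidate_mask.float()

    # Loss on the pseudo-candidate set: push probability mass onto the
    # candidate classes by minimizing the negative log of their total mass.
    cand_prob = (probs * cand).sum(dim=1).clamp_min(1e-12)
    loss_candidate = -torch.log(cand_prob)

    # Loss on labels outside the candidate set: penalize each non-candidate
    # class individually so its probability is driven toward zero.
    loss_non_candidate = (
        -(torch.log((1.0 - probs).clamp_min(1e-12)) * (1.0 - cand)).sum(dim=1)
    )

    # Threshold-based gating: only apply the sharpening term to samples whose
    # most likely candidate class is already confident, to avoid reinforcing
    # noisy pseudo-candidates.
    max_cand_conf = (probs * cand).max(dim=1).values
    gate = (max_cand_conf >= threshold).float()

    return (gate * loss_candidate + loss_non_candidate).mean()
```

In this reading, the candidate-set term concentrates probability inside the pseudo-candidate set while the non-candidate term suppresses classes outside it, and the threshold decides which samples are confident enough for the sharpening term to act on; the actual loss forms and thresholding rule used by UDPCS may differ.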
