Poster

Train Till You Drop: Towards Stable and Robust Source-free Unsupervised 3D Domain Adaptation

Bjoern Michele · Alexandre Boulch · Tuan Hung Vu · Gilles Puy · Renaud Marlet · Nicolas Courty

Poster #73
Strong Double Blind: This paper was not made available on public preprint services during the review process.
Tue 1 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

We tackle the problem of source-free unsupervised domain adaptation (SFUDA) for 3D semantic segmentation. This challenging problem amounts to performing domain adaptation on an unlabeled target domain without any access to source data; the only available information is a model trained to achieve good performance on the source domain. Our first analysis reveals a pattern that commonly occurs across SFUDA procedures: performance degrades after some training time, a by-product of an under-constrained and ill-posed problem. We discuss two strategies to alleviate this issue. First, we propose a sensible way to regularize the learning problem. Second, we introduce a novel criterion based on agreement with a reference model, which is used (1) to stop the training and (2) as a validator to select hyperparameters. Our contributions are easy to implement and readily applicable to all SFUDA methods, ensuring stable improvements over all baselines. We validate our findings across various settings, achieving state-of-the-art performance.
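The abstract does not spell out the exact form of the agreement criterion, so the following is only a minimal sketch of the idea it describes: measure how often the adapting model and a reference model agree on unlabeled target points, and use that score both to stop training and to select checkpoints or hyperparameters. It assumes a frozen source-pretrained network as the reference; `adapted_model`, `reference_model`, `target_loader`, and `sfuda_loss` are placeholder names, not the paper's actual API.

```python
# Sketch of an agreement-based stopping / validation criterion for SFUDA.
# Assumption: the reference model is the frozen source-pretrained network,
# and agreement is the fraction of target points with identical predictions.
import torch


@torch.no_grad()
def agreement_score(adapted_model, reference_model, target_loader, device="cpu"):
    """Fraction of target points on which the adapted and reference models
    predict the same semantic class (no target labels required)."""
    adapted_model.eval()
    reference_model.eval()
    agree, total = 0, 0
    for points in target_loader:                       # unlabeled target batches
        points = points.to(device)
        pred_a = adapted_model(points).argmax(dim=-1)  # (B, N) class indices
        pred_r = reference_model(points).argmax(dim=-1)
        agree += (pred_a == pred_r).sum().item()
        total += pred_a.numel()
    return agree / max(total, 1)


def adapt(adapted_model, reference_model, target_loader, optimizer,
          sfuda_loss, num_epochs=50, min_agreement=0.9, device="cpu"):
    """Run any SFUDA objective, monitor agreement after each epoch, keep the
    best-scoring checkpoint, and stop once agreement drops below a tolerance."""
    best_score, best_state = -1.0, None
    for epoch in range(num_epochs):
        adapted_model.train()
        for points in target_loader:
            points = points.to(device)
            loss = sfuda_loss(adapted_model, points)   # placeholder SFUDA objective
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        score = agreement_score(adapted_model, reference_model,
                                target_loader, device)
        if score > best_score:                         # criterion as validator
            best_score = score
            best_state = {k: v.detach().clone()
                          for k, v in adapted_model.state_dict().items()}
        if score < min_agreement:                      # criterion as stopping rule
            break
    if best_state is not None:
        adapted_model.load_state_dict(best_state)
    return adapted_model
```

The same `agreement_score` value could also be compared across runs with different hyperparameters to pick a configuration, since it needs no target labels.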
