We tackle the problem of source-free unsupervised domain adaptation (SFUDA) for 3D semantic segmentation. This challenging problem amounts to performing domain adaptation on an unlabeled target domain without any access to source data; the only available information is a model trained to achieve good performance on the source domain. Our first analysis reveals a pattern common to all SFUDA procedures: performance degrades after some amount of training, a by-product of an under-constrained and ill-posed problem. We discuss two strategies to alleviate this issue. First, we propose a sensible way to regularize the learning problem. Second, we introduce a novel criterion based on agreement with a reference model, which is used (1) to stop training and (2) as a validator to select hyperparameters. Our contributions are easy to implement and readily applicable to all SFUDA methods, ensuring stable improvements over all baselines. We validate our findings in various settings, achieving state-of-the-art performance.
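The agreement criterion above can be illustrated with a minimal sketch. The abstract does not specify the exact agreement metric, so the per-point label agreement used here, and the helper names `agreement_score` and `select_checkpoint`, are illustrative assumptions: agreement between each adaptation checkpoint's hard predictions and a frozen reference model's predictions serves as the validation signal for stopping and hyperparameter selection.

```python
import numpy as np

def agreement_score(pred_ref: np.ndarray, pred_ckpt: np.ndarray) -> float:
    """Fraction of points on which the reference model and a checkpoint
    predict the same semantic label (hypothetical agreement metric)."""
    assert pred_ref.shape == pred_ckpt.shape
    return float((pred_ref == pred_ckpt).mean())

def select_checkpoint(pred_ref: np.ndarray, ckpt_preds: list) -> tuple:
    """Return the index of the checkpoint that agrees most with the
    reference model, plus all agreement scores. Used both to stop
    training (pick the best epoch) and to compare hyperparameter runs."""
    scores = [agreement_score(pred_ref, p) for p in ckpt_preds]
    return int(np.argmax(scores)), scores
```

In practice the same score would be computed over the unlabeled target point clouds at each epoch, with training stopped once agreement peaks, since no target labels are available to validate against directly.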