

Poster

Enhancing Source-Free Domain Adaptive Object Detection with Low-confidence Pseudo Label Distillation

Ilhoon Yoon · Hyeongjun Kwon · Jin Kim · Junyoung Park · Hyunsung Jang · Kwanghoon Sohn

Poster #55
Strong Double Blind: This paper was not made available on public preprint services during the review process.
[ Paper PDF ]
Tue 1 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Source-free domain adaptation for Object Detection (SFOD) is a promising strategy for deploying trained detectors to new, unlabeled domains without accessing source data, addressing significant concerns around data privacy and efficiency. Most SFOD methods leverage the conventional Mean-Teacher (MT) self-training paradigm, which relies heavily on High-confidence Pseudo Labels (HPL). However, these HPL often overlook objects that are unfamiliar across domains, leading to adaptation that is biased towards objects familiar to the source domain. To address this limitation, we introduce the Low-confidence Pseudo Label Distillation (LPLD) loss within the Mean-Teacher based SFOD framework. This novel approach is designed to leverage proposals from the Region Proposal Network (RPN), which potentially encompass hard-to-detect objects in unfamiliar domains. First, we extract HPL using a standard pseudo-labeling technique and mine a set of Low-confidence Pseudo Labels (LPL) from the RPN proposals, retaining those that do not overlap significantly with HPL. These LPL are further refined, and an LPLD loss is calculated to leverage class-relation information and reduce the effect of inherent noise. Furthermore, we use feature distance to adaptively weight the LPLD loss so that it focuses on LPL containing more foreground area. Our method outperforms previous SFOD methods on four cross-domain object detection benchmarks. Extensive experiments demonstrate that our LPLD loss leads to effective adaptation by reducing false negatives and facilitating the use of general knowledge from the source model. Code is available at https://github.com/AnonymousPaperSource/paper11254.
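
As a rough illustration of the LPL mining step described in the abstract, here is a minimal PyTorch sketch: low-scoring RPN proposals are kept as LPL candidates only if they do not overlap significantly with any HPL box. This is not the authors' implementation (see the linked repository for that); the function name and the `score_thresh` and `iou_thresh` values are hypothetical, and a torchvision-style detector is assumed.

```python
# Minimal sketch of LPL mining, assuming xyxy boxes from a
# torchvision-style Faster R-CNN teacher. Thresholds are illustrative.
import torch
from torchvision.ops import box_iou

def mine_low_confidence_pseudo_labels(
    proposals: torch.Tensor,        # (N, 4) RPN proposal boxes, xyxy
    proposal_scores: torch.Tensor,  # (N,) teacher confidence per proposal
    hpl_boxes: torch.Tensor,        # (M, 4) high-confidence pseudo-label boxes
    score_thresh: float = 0.8,      # HPL confidence cutoff (hypothetical)
    iou_thresh: float = 0.5,        # overlap cutoff with HPL (hypothetical)
) -> torch.Tensor:
    """Keep low-scoring proposals that do not overlap any HPL.

    These are candidate Low-confidence Pseudo Labels (LPL): regions the
    RPN proposed but the teacher did not score highly, which may contain
    objects unfamiliar to the source domain.
    """
    candidates = proposals[proposal_scores < score_thresh]
    if hpl_boxes.numel() == 0:
        return candidates
    # Drop candidates whose max IoU with any HPL is significant, so LPL
    # cover only regions the high-confidence labels missed.
    max_iou = box_iou(candidates, hpl_boxes).max(dim=1).values
    return candidates[max_iou < iou_thresh]
```

Per the abstract, the retained LPL are then refined, scored with the LPLD loss using class-relation information, and adaptively weighted by feature distance so that LPL with more foreground area contribute more; those steps are paper-specific and omitted from this sketch.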
