

Poster

Plug and Play: A Representation Enhanced Domain Adapter for Collaborative Perception

Tianyou Luo · Quan Yuan · Yuchen Xia · Guiyang Luo · Yujia Yang · Jinglin Li

Fri 4 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Sharing intermediate neural features enables agents to effectively see through occlusions. Due to agent diversity, some pioneering works have studied domain adaptation for heterogeneous neural features. Nevertheless, these works all partially replace agents’ private neural networks with newly trained components, which breaks the model integrity and bidirectional compatibility of agents. In this paper, we consider an open challenge: how can non-destructive domain adapters be learned for heterogeneous legacy models to achieve collaborative perception while remaining compatible with continually emerging new agent models? To overcome this challenge, we propose the first plug-and-play domain adapter (PnPDA) for heterogeneous collaborative perception. PnPDA builds a semantic calibrator based on contrastive learning to supervise domain gap bridging without modifying the original models. A semantic converter is learned to transform the semantic space of features, while a semantic enhancer is used to strengthen their representation. By specifying standard semantics, new models equipped with PnPDA can easily join existing collaborations. Extensive experiments on the OPV2V dataset show that PnPDA non-destructively bridges the domain gap and outperforms the SOTA by 9.13%.
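The abstract describes an adapter built from a semantic converter and a semantic enhancer that sits between heterogeneous models without touching either agent's own network. The sketch below is a minimal, hypothetical PyTorch rendering of that structure; the module names, layer choices, and channel sizes are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SemanticConverter(nn.Module):
    """Maps a sender's features toward a shared (standard) semantic space."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)


class SemanticEnhancer(nn.Module):
    """Refines converted features with a lightweight residual block."""

    def __init__(self, channels: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.block(x)


class PnPDAdapter(nn.Module):
    """Plug-and-play adapter applied only to received heterogeneous features;
    the sender's and receiver's frozen encoders/decoders are left untouched."""

    def __init__(self, sender_channels: int, receiver_channels: int):
        super().__init__()
        self.converter = SemanticConverter(sender_channels, receiver_channels)
        self.enhancer = SemanticEnhancer(receiver_channels)

    def forward(self, received_feat: torch.Tensor) -> torch.Tensor:
        return self.enhancer(self.converter(received_feat))


# Usage: adapt a neighbor's 256-channel BEV feature map to the ego model's
# 64-channel space before feeding it to the ego model's fusion module.
# (All shapes here are hypothetical.)
adapter = PnPDAdapter(sender_channels=256, receiver_channels=64)
neighbor_feat = torch.randn(1, 256, 100, 252)
adapted = adapter(neighbor_feat)
print(adapted.shape)  # torch.Size([1, 64, 100, 252])
```

The key design point suggested by the abstract is that only this adapter is trained (supervised by the contrastive semantic calibrator), so each agent's original model stays intact and new agents can join by converting to the agreed standard semantics.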
