

Poster

CoDA: Instructive Chain-of-Domain Adaptation with Severity-Aware Visual Prompt Tuning

ZiYang Gong · FuHao Li · Yupeng Deng · Deblina Bhattacharjee · Xianzheng Ma · Xiangwei Zhu · Zhenming Ji

Tue 1 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Unsupervised Domain Adaptation (UDA) aims to adapt models from labeled source domains to unlabeled target domains. When adapting to adverse scenes, existing UDA methods perform poorly because they lack instructions and therefore overlook the discrepancies among adverse scenes. To tackle this, we propose CoDA, which instructs models to distinguish, focus on, and learn from these discrepancies at both the scene and image levels. Specifically, CoDA consists of a Chain-of-Domain (CoD) strategy and a Severity-Aware Visual Prompt Tuning (SAVPT) mechanism. CoD provides scene-level instructions: it divides adverse scenes into easy and hard scenes, guiding models to adapt from the source domain to easy domains with easy-scene images, and then to hard domains with hard-scene images, thereby laying a solid foundation for the whole adaptation. Building on this foundation, SAVPT provides more detailed image-level instructions to further boost performance. SAVPT features a novel metric, Severity, which divides adverse-scene images into low-severity and high-severity images. Severity then directs visual prompts and adapters, instructing models to concentrate on unified severity features rather than scene-specific features, without adding complexity to the model architecture. CoDA achieves state-of-the-art performance on widely used benchmarks under all adverse scenes. Notably, CoDA outperforms existing methods by 4.6% and 10.3% mIoU on the Foggy Driving and Foggy Zurich benchmarks, respectively. We will make our code available upon acceptance.
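The sketch below illustrates, in PyTorch-style pseudocode, how the two ideas described in the abstract could be wired together: a severity score routes each adverse-scene image to a low- or high-severity visual prompt, and a chain-of-domain curriculum adapts from source to easy scenes and then to hard scenes. This is a minimal sketch under our own assumptions, not the authors' implementation; the class and function names (`SeverityAwarePrompt`, `chain_of_domain_schedule`), the 0.5 severity threshold, and the generic `adapt_fn` UDA objective are all hypothetical placeholders.

```python
import torch
import torch.nn as nn


class SeverityAwarePrompt(nn.Module):
    """Hypothetical sketch: add a learnable visual prompt chosen by a severity score."""

    def __init__(self, prompt_shape=(3, 512, 512), threshold=0.5):
        super().__init__()
        # Two learnable additive prompts: one for low- and one for high-severity images.
        self.low_prompt = nn.Parameter(torch.zeros(prompt_shape))
        self.high_prompt = nn.Parameter(torch.zeros(prompt_shape))
        self.threshold = threshold  # assumed split point on a [0, 1] severity score

    def forward(self, images, severity):
        # images: (B, 3, H, W); severity: (B,) scores in [0, 1]
        use_high = (severity >= self.threshold).view(-1, 1, 1, 1)
        prompt = torch.where(use_high, self.high_prompt, self.low_prompt)
        return images + prompt


def chain_of_domain_schedule(model, optimizer, source_loader, easy_loader,
                             hard_loader, adapt_fn, epochs_per_stage=10):
    """Hypothetical curriculum: adapt source -> easy scenes -> hard scenes, in order."""
    for stage_loader in (easy_loader, hard_loader):
        for _ in range(epochs_per_stage):
            for src_batch, tgt_batch in zip(source_loader, stage_loader):
                # adapt_fn stands in for any UDA objective (e.g. self-training loss).
                loss = adapt_fn(model, src_batch, tgt_batch)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
```

In this reading, the easy-to-hard ordering acts as a curriculum over target domains, while the severity-conditioned prompts add image-level adaptation without changing the backbone architecture.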
