

Poster

Open-World Dynamic Prompt and Continual Visual Representation Learning

Youngeun Kim · Jun Fang · Qin Zhang · Zhaowei Cai · Yantao Shen · Rahul Duggal · Dripta S. Raychaudhuri · Zhuowen Tu · Yifan Xing · Onkar Dabeer

Strong Double Blind: This paper was not made available on public preprint services during the review process.
Thu 3 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

The open world is inherently dynamic, characterized by ever-evolving concepts and distributions. Continual learning (CL) in this dynamic open-world environment, where knowledge must be continuously acquired from data streams without forgetting, presents a significant challenge. Existing CL methods, whether rehearsal-free or rehearsal-based, often struggle to generalize effectively to unseen test-time classes in this open-world context. To address this challenge, we introduce a new practical CL setting tailored for open-world visual representation learning. In this setting, subsequent data streams systematically introduce novel classes that are disjoint from the classes seen in previous training phases, while also remaining distinct from the unseen test classes. In response, we propose the Dynamic Prompt and Representation Learner (DPaRL), a simple yet effective Prompt-based CL (PCL) method. DPaRL learns to generate dynamic prompts at inference time, rather than relying on a static prompt pool as in previous PCL methods. In addition, DPaRL jointly learns dynamic prompt generation and discriminative representation at each training stage, whereas prior PCL methods refine only the prompt learning throughout the process. Our experimental results demonstrate the superiority of our approach, which surpasses state-of-the-art methods on well-established open-world image retrieval benchmarks by an average of 4.7% in Recall@1.
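To illustrate the distinction the abstract draws between a static prompt pool and input-conditioned prompt generation, below is a minimal PyTorch-style sketch. It is not the authors' implementation: the module names (`DynamicPromptGenerator`, `DPaRLSketch`), dimensions, the small transformer stand-in for a frozen backbone, and the mean-pooled query are all illustrative assumptions; the only ideas taken from the abstract are that prompts are generated per input rather than retrieved from a fixed pool, and that the prompt generator and the discriminative representation head are trained jointly.

```python
import torch
import torch.nn as nn


class DynamicPromptGenerator(nn.Module):
    """Maps an image query embedding to input-conditioned prompt tokens,
    rather than retrieving them from a fixed prompt pool (illustrative)."""

    def __init__(self, embed_dim: int = 256, num_prompts: int = 8):
        super().__init__()
        self.num_prompts = num_prompts
        self.net = nn.Sequential(
            nn.Linear(embed_dim, embed_dim),
            nn.GELU(),
            nn.Linear(embed_dim, num_prompts * embed_dim),
        )

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        # query: (B, D) global image embedding
        prompts = self.net(query)                                   # (B, P * D)
        return prompts.view(query.size(0), self.num_prompts, -1)   # (B, P, D)


class DPaRLSketch(nn.Module):
    """Prepends generated prompts to patch tokens, encodes them with a frozen
    transformer stand-in, and projects to a retrieval embedding. The prompt
    generator and the representation head are optimized jointly."""

    def __init__(self, embed_dim: int = 256, feat_dim: int = 128,
                 num_prompts: int = 8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        for p in self.encoder.parameters():   # frozen pre-trained backbone
            p.requires_grad = False
        self.prompt_gen = DynamicPromptGenerator(embed_dim, num_prompts)
        self.head = nn.Linear(embed_dim, feat_dim)  # trained at every stage

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (B, N, D) tokens of one input image
        query = patch_tokens.mean(dim=1)            # crude global query (B, D)
        prompts = self.prompt_gen(query)            # (B, P, D)
        tokens = torch.cat([prompts, patch_tokens], dim=1)
        feats = self.encoder(tokens)                # (B, P + N, D)
        return self.head(feats[:, 0])               # retrieval embedding


if __name__ == "__main__":
    model = DPaRLSketch()
    images = torch.randn(4, 49, 256)   # 4 images, 49 patch tokens each
    print(model(images).shape)         # torch.Size([4, 128])
```

A static prompt pool would instead keep a fixed `nn.Parameter` bank of prompt tokens and select entries by key matching at inference; the sketch above replaces that lookup with a learned mapping from the input itself.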
