

Poster

CroMo-Mixup: Augmenting Cross-Model Representations for Continual Self-Supervised Learning

Erum Mushtaq · Duygu Nur Yaldiz · Yavuz Faruk Bakman · Jie Ding · Chenyang Tao · Dimitrios Dimitriadis · Salman Avestimehr

Strong Double Blind: This paper was not made available on public preprint services during the review process.
Tue 1 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Continual self-supervised learning (CSSL) learns a series of tasks sequentially on unlabeled data. Catastrophic forgetting and task confusion are considered the two main challenges of continual learning. While the CSSL problem has been studied to address catastrophic forgetting, little work has been done to address the task confusion aspect. Through extensive experiments, we demonstrate that self-supervised learning (SSL) can make CSSL more susceptible to task confusion, particularly in less diverse class-incremental learning settings, because classes belonging to different tasks are never trained concurrently. Motivated by this challenge, we present a novel cross-model feature Mixup (CroMo-Mixup) framework that addresses this issue through two key components: 1) Cross-Task data Mixup, which mixes samples across tasks to enhance negative-sample diversity; and 2) Cross-Model feature Mixup, which learns similarities between the embeddings of the mixed sample and of the original images, obtained from the current and old models respectively, to learn cross-task class contrast and facilitate old-knowledge retrieval. We evaluate the effectiveness of CroMo-Mixup in improving both Task-ID prediction and average linear accuracy across all tasks on three datasets, CIFAR10, CIFAR100, and tinyImageNet, under different class-incremental learning settings. We also validate the compatibility of CroMo-Mixup with four state-of-the-art SSL objectives.
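To make the two components concrete, below is a minimal, illustrative PyTorch sketch of the idea described in the abstract: a current-task image is mixed with a stored old-task image (Cross-Task data Mixup), and the mixed image's embedding under the current encoder is pulled toward a lambda-weighted combination of embeddings of the original images from the current and frozen old encoders (Cross-Model feature Mixup). The function names, the Beta-sampled mixing coefficient, and the cosine-similarity target are assumptions for illustration; they are a simplification, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def cross_task_mixup(x_current, x_old, alpha=1.0):
    """Mix a current-task batch with a buffered old-task batch (illustrative).

    A single mixing coefficient lambda is drawn from a Beta(alpha, alpha)
    distribution, as is common for mixup-style augmentation.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_current + (1.0 - lam) * x_old
    return x_mix, lam

def cromo_mixup_loss(f_current, f_old, x_current, x_old, alpha=1.0):
    """Simplified cross-model feature mixup loss (assumed form, not the paper's).

    f_current: trainable encoder for the current task.
    f_old:     frozen encoder kept from the previous task.
    The mixed image is embedded by the current model; its embedding is
    encouraged to match a lambda-weighted combination of the original
    images' embeddings from the current and old models, respectively.
    """
    x_mix, lam = cross_task_mixup(x_current, x_old, alpha)
    z_mix = f_current(x_mix)
    with torch.no_grad():
        z_cur = f_current(x_current)  # current model on the current-task image
        z_old = f_old(x_old)          # frozen old model on the old-task image
    target = lam * z_cur + (1.0 - lam) * z_old
    # Maximize cosine similarity between the mixed embedding and the target.
    return 1.0 - F.cosine_similarity(z_mix, target, dim=-1).mean()
```

In practice this auxiliary term would be added to the chosen SSL objective (e.g., a contrastive or non-contrastive loss) during training on each new task; the buffered old-task samples provide the cross-task negatives referred to in the abstract.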
