

Poster #290

Text-Guided Video Masked Autoencoder

David Fan · Jue Wang · Shuai Liao · Zhikang Zhang · Vimal Bhat · Xinyu Li
Strong Double Blind: this paper was not made available on public preprint services during the review process.
Fri 4 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Recent video masked autoencoder (MAE) works have designed improved masking algorithms focused on saliency. These works leverage visual cues such as motion to mask the most salient regions. However, the robustness of visual cues depends on how often input videos match the underlying statistical assumptions. On the other hand, natural language description is an information-dense representation of video that implicitly captures saliency without requiring modality-specific assumptions, and it has not yet been explored for video MAE. To this end, we introduce a novel text-guided masking strategy (TGM) that masks the video regions with the highest correspondence to paired captions. Without leveraging any explicit visual cues for saliency, our text-guided masking is competitive with state-of-the-art masking algorithms such as motion-guided masking. To further benefit from the semantics of natural language for masked reconstruction, we next introduce a unified framework for joint MAE and masked video-text contrastive learning. We show that across existing masking algorithms, unifying MAE and masked video-text contrastive learning improves downstream performance compared to pure MAE on a variety of video recognition tasks, especially for linear probing. When TGM is combined with this unified framework, we achieve the best relative performance on five action recognition datasets and one egocentric dataset, highlighting the complementary nature of natural language captions for masked video modeling.
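To make the core idea concrete, below is a minimal sketch of how text-guided masking could be implemented, assuming CLIP-style patch and caption embeddings and a fixed masking ratio; the names (`patch_embeds`, `text_embed`, `mask_ratio`) are illustrative and not the authors' API or implementation.

```python
# Hypothetical sketch of text-guided masking (TGM): mask the video patch tokens
# whose embeddings correspond most strongly to the paired caption.
import torch
import torch.nn.functional as F

def text_guided_mask(patch_embeds: torch.Tensor,
                     text_embed: torch.Tensor,
                     mask_ratio: float = 0.9) -> torch.Tensor:
    """Return a boolean mask over patch tokens (True = masked).

    patch_embeds: (N, D) embeddings of spatio-temporal video patches.
    text_embed:   (D,)   embedding of the paired caption.
    mask_ratio:   fraction of patches to mask, highest text similarity first.
    """
    # Cosine similarity between each patch embedding and the caption embedding.
    sim = F.cosine_similarity(patch_embeds, text_embed.unsqueeze(0), dim=-1)  # (N,)

    # Mask the patches most aligned with the caption (highest correspondence).
    num_mask = int(mask_ratio * patch_embeds.shape[0])
    top_idx = sim.topk(num_mask).indices

    mask = torch.zeros(patch_embeds.shape[0], dtype=torch.bool)
    mask[top_idx] = True
    return mask

# Example: 1568 patches (16 frames x 14 x 14 tokens), 512-dim embeddings.
mask = text_guided_mask(torch.randn(1568, 512), torch.randn(512), mask_ratio=0.9)
```

In the unified framework described above, such a mask would feed a standard MAE reconstruction loss while a masked video-text contrastive loss is computed alongside it; the exact loss weighting and encoder sharing are design choices of the paper not reproduced here.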
