

Poster

Leveraging temporal contextualization for video action recognition

Minji Kim · Dongyoon Han · Taekyung Kim · Bohyung Han

Poster #184
Wed 2 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Pretrained vision-language models (VLMs) have shown effectiveness in video understanding. However, recent studies have not sufficiently leveraged essential temporal information from videos, simply averaging frame-wise representations or referencing only consecutive frames. We introduce Temporally Contextualized CLIP (TC-CLIP), a pioneering framework for video understanding that effectively and efficiently leverages comprehensive video information. We propose Temporal Contextualization (TC), a novel layer-wise temporal information infusion mechanism for video that extracts core information from each frame, interconnects relevant information across the video to summarize it into context tokens, and ultimately leverages the context tokens during the feature encoding process. Furthermore, our Video-conditional Prompting (VP) module employs the context tokens to generate informative prompts in the text modality. We conduct extensive experiments in zero-shot, few-shot, base-to-novel, and fully-supervised settings to validate the superiority of TC-CLIP, and ablation studies on TC and VP support our design choices. Our code will be publicly available.
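The abstract describes TC as a three-step, per-layer process: extract salient tokens from each frame, summarize them across frames into context tokens, and let every frame attend over those context tokens during encoding. The following is a minimal PyTorch sketch of that idea under our own assumptions; the class, its saliency scoring, the averaging-based summarization, and all tensor shapes are illustrative and are not the authors' released implementation.

```python
import torch
import torch.nn as nn


class TemporalContextualizationSketch(nn.Module):
    """Illustrative layer-wise temporal context infusion (not the official TC-CLIP code).

    Per transformer layer: (1) pick the k most salient patch tokens in each frame,
    (2) pool them across all frames into a small set of context tokens,
    (3) let each frame attend over its own tokens plus the shared context tokens.
    """

    def __init__(self, dim: int, num_heads: int = 8,
                 k_per_frame: int = 4, num_context_tokens: int = 8):
        super().__init__()
        self.k = k_per_frame
        self.num_ctx = num_context_tokens
        self.saliency = nn.Linear(dim, 1)        # scores per-token importance (assumed)
        self.summarize = nn.Linear(dim, dim)     # projects pooled seed tokens (assumed)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, N, D) = batch, frames, patch tokens per frame, channels
        B, T, N, D = x.shape

        # 1) select the k most salient tokens in each frame
        scores = self.saliency(x).squeeze(-1)                  # (B, T, N)
        topk = scores.topk(self.k, dim=-1).indices             # (B, T, k)
        idx = topk.unsqueeze(-1).expand(-1, -1, -1, D)
        seeds = torch.gather(x, 2, idx)                        # (B, T, k, D)

        # 2) summarize seeds from all frames into shared context tokens;
        #    plain group-averaging stands in for whatever learned grouping
        #    the actual method uses
        seeds = self.summarize(seeds.reshape(B, T * self.k, D))
        assert (T * self.k) % self.num_ctx == 0, "toy grouping needs divisibility"
        ctx = seeds.reshape(B, self.num_ctx, -1, D).mean(dim=2)  # (B, num_ctx, D)

        # 3) each frame attends over its own tokens plus the context tokens
        ctx_rep = ctx.unsqueeze(1).expand(-1, T, -1, -1)          # (B, T, num_ctx, D)
        kv = torch.cat([x, ctx_rep], dim=2).reshape(B * T, N + self.num_ctx, D)
        q = x.reshape(B * T, N, D)
        out, _ = self.attn(q, kv, kv)
        return out.reshape(B, T, N, D)


# toy usage: 2 videos, 8 frames, 16 patch tokens of dim 64
x = torch.randn(2, 8, 16, 64)
layer = TemporalContextualizationSketch(dim=64)
print(layer(x).shape)  # torch.Size([2, 8, 16, 64])
```

The point of the sketch is the information flow: context tokens are built from the whole video and re-injected into every frame's attention, rather than averaging frame features at the end or attending only to neighboring frames.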
