

Poster

InternVideo2: Scaling Foundation Models for Multimodal Video Understanding

Yi Wang · Kunchang Li · Xinhao Li · Jiashuo Yu · Yinan He · Guo Chen · Baoqi Pei · Rongkun Zheng · Jilan Xu · Zun Wang · Yansong Shi · Tianxiang Jiang · Songze Li · Hongjie Zhang · Yifei Huang · Yu Qiao · Yali Wang · Limin Wang

Poster #190
Wed 2 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

We introduce InternVideo2, a new video foundation model (ViFM) that achieves state-of-the-art results in action recognition, video-text tasks, and video-centric dialogue. Our core design is a progressive training approach that unifies masked video token reconstruction, cross-modal contrastive learning, and next-token prediction, scaling the video encoder to 6B parameters. At the data level, we prioritize spatiotemporal consistency by semantically segmenting videos and generating video-audio-speech captions, which improves video-text alignment. Through extensive experiments, we validate our designs and demonstrate state-of-the-art performance on over 60 of the 74 video and audio tasks evaluated. Notably, our model outperforms others on various video-related dialogue and long-video understanding benchmarks, highlighting its ability to reason over and comprehend long contexts. Code and models will be released.
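The abstract names three training objectives but does not give their formulation. As a rough illustration only, the sketch below (PyTorch; all tensor shapes, names, and the temperature value are assumptions, not taken from the paper) shows what a masked video token reconstruction loss, a cross-modal contrastive loss, and a next-token prediction loss typically look like. Note the paper describes applying these progressively in successive stages, not necessarily as one joint sum.

```python
import torch
import torch.nn.functional as F

# Hypothetical illustration of the three objectives named in the abstract.
# Shapes and hyperparameters are assumptions for this sketch only.

def masked_reconstruction_loss(pred, target, mask):
    """Regress features of masked video tokens only (mask: bool [B, N])."""
    return F.mse_loss(pred[mask], target[mask])

def crossmodal_contrastive_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired video/text embeddings [B, D]."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature                  # [B, B] similarity matrix
    labels = torch.arange(v.size(0), device=v.device)
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.T, labels)) / 2

def next_token_loss(lm_logits, targets):
    """Standard language-modeling loss (logits [B, T, V], targets [B, T])."""
    return F.cross_entropy(lm_logits.flatten(0, 1), targets.flatten())

# Toy shapes, just to show the calls run end to end.
B, N, D, T, V = 4, 16, 32, 8, 100
mask = torch.rand(B, N) < 0.5
loss = (masked_reconstruction_loss(torch.randn(B, N, D),
                                   torch.randn(B, N, D), mask)
        + crossmodal_contrastive_loss(torch.randn(B, D), torch.randn(B, D))
        + next_token_loss(torch.randn(B, T, V), torch.randint(0, V, (B, T))))
```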
