

Poster

Vary: Scaling up the Vision Vocabulary for Large Vision-Language Models

Haoran Wei · Lingyu Kong · Jinyue Chen · Liang Zhao · Zheng Ge · Jinrong Yang · Jianjian Sun · Chunrui Han · Xiangyu Zhang

Thu 3 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Most Large Vision-Language Models (LVLMs) share the same vision vocabulary, i.e., CLIP, for common vision tasks. However, for special tasks that require dense and fine-grained perception, a CLIP-style vocabulary may tokenize the corresponding vision knowledge inefficiently and can even suffer from out-of-vocabulary problems. Accordingly, we propose Vary, an efficient and effective method to scale up the vision vocabulary of LVLMs. The procedure of Vary naturally falls into two stages: the generation and the integration of a new vision vocabulary. In the first stage, we devise a vocabulary network together with a tiny decoder-only transformer to compress rich vision signals. In the second, we scale up the vanilla vision vocabulary by merging the new vocabulary with the original one (CLIP), enabling LVLMs to effectively acquire new features. We present frameworks at two sizes: Vary-base (7B) and Vary-toy (1.8B), both of which achieve excellent fine-grained perception while maintaining strong general ability.
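To illustrate the integration stage described above, here is a minimal PyTorch sketch of merging two vision vocabularies by concatenating their image tokens before they enter the LLM. All module names (`MergedVisionVocabulary`, `ToyEncoder`), the use of separate linear projections, and the choice of token-wise concatenation are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only: hypothetical module names, not the authors' released code.
import torch
import torch.nn as nn


class MergedVisionVocabulary(nn.Module):
    """Concatenate tokens from the original CLIP-style vocabulary and a new
    fine-grained vocabulary network before feeding them to the LLM."""

    def __init__(self, clip_encoder: nn.Module, new_vocab_encoder: nn.Module,
                 clip_dim: int, new_dim: int, llm_dim: int):
        super().__init__()
        self.clip_encoder = clip_encoder            # original vision vocabulary (CLIP)
        self.new_vocab_encoder = new_vocab_encoder  # newly generated vocabulary network
        # Separate linear projections map each vocabulary into the LLM embedding space (assumption).
        self.clip_proj = nn.Linear(clip_dim, llm_dim)
        self.new_proj = nn.Linear(new_dim, llm_dim)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        clip_tokens = self.clip_proj(self.clip_encoder(image))     # (B, N1, llm_dim)
        new_tokens = self.new_proj(self.new_vocab_encoder(image))  # (B, N2, llm_dim)
        # Merge the two vocabularies along the token dimension.
        return torch.cat([clip_tokens, new_tokens], dim=1)


class ToyEncoder(nn.Module):
    """Toy stand-in for a vision encoder that returns a (B, N, D) token sequence."""

    def __init__(self, num_tokens: int, dim: int):
        super().__init__()
        self.num_tokens, self.dim = num_tokens, dim
        self.proj = nn.Linear(3 * 224 * 224, num_tokens * dim)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        flat = image.flatten(1)
        return self.proj(flat).view(image.size(0), self.num_tokens, self.dim)


if __name__ == "__main__":
    merged = MergedVisionVocabulary(ToyEncoder(256, 1024), ToyEncoder(256, 1024),
                                    clip_dim=1024, new_dim=1024, llm_dim=2048)
    tokens = merged(torch.randn(2, 3, 224, 224))
    print(tokens.shape)  # torch.Size([2, 512, 2048])
```

Keeping each vocabulary behind its own projection lets the new encoder supply dense, fine-grained tokens without disturbing the features the LLM already expects from CLIP; token counts and dimensions here are arbitrary placeholders.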
