Poster

GRiT: A Generative Region-to-text Transformer for Object Understanding

Jialian Wu · Jianfeng Wang · Zhengyuan Yang · Zhe Gan · Zicheng Liu · Junsong Yuan · Lijuan Wang

Tue 1 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

This paper presents a Generative RegIon-to-Text transformer, GRiT, for object understanding. The spirit of GRiT is to formulate object understanding as ⟨region, text⟩ pairs, where the region localizes an object and the text describes it. Specifically, GRiT consists of a visual encoder to extract image features, a foreground object extractor to localize objects, and a text decoder to generate natural language for objects. With the same model architecture, GRiT describes objects not only with simple nouns but also with rich descriptive sentences. We characterize GRiT as open-set object understanding, since the model architecture places no limit on the object descriptions it can output. Experimentally, we apply GRiT to dense captioning and object detection tasks. GRiT achieves new state-of-the-art dense captioning performance (15.5 mAP on Visual Genome) and competitive detection accuracy (60.4 AP on COCO test-dev).
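The abstract's three-stage pipeline (visual encoder → foreground object extractor → region-conditioned text decoder) can be illustrated with a minimal PyTorch sketch. Every concrete choice below is an assumption for illustration only, not the paper's exact design: the patchify-style encoder, the per-token box and objectness heads standing in for the foreground object extractor, and a standard Transformer decoder generating a description per region.

```python
import torch
import torch.nn as nn

class GRiTSketch(nn.Module):
    """Minimal sketch of the pipeline described in the abstract.

    All module choices here are illustrative assumptions,
    not the paper's actual architecture.
    """

    def __init__(self, d_model=256, vocab_size=30522):
        super().__init__()
        # Visual encoder: extracts image features (placeholder patchify stem).
        self.visual_encoder = nn.Sequential(
            nn.Conv2d(3, d_model, kernel_size=16, stride=16),
            nn.Flatten(2),  # (B, d_model, N) where N = num patches
        )
        # Foreground object extractor (simplified): class-agnostic boxes
        # plus an objectness score per feature token.
        self.box_head = nn.Linear(d_model, 4)
        self.objectness_head = nn.Linear(d_model, 1)
        # Text decoder: generates a description (a noun or a full
        # sentence) conditioned on the extracted image features.
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.text_decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.token_embed = nn.Embedding(vocab_size, d_model)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, images, text_tokens):
        # (B, d_model, N) -> (B, N, d_model) sequence of image features
        feats = self.visual_encoder(images).transpose(1, 2)
        boxes = self.box_head(feats)              # (B, N, 4)
        objectness = self.objectness_head(feats)  # (B, N, 1)
        # Decode description tokens, cross-attending to image features.
        tgt = self.token_embed(text_tokens)       # (B, T, d_model)
        dec = self.text_decoder(tgt, memory=feats)
        logits = self.lm_head(dec)                # (B, T, vocab_size)
        return boxes, objectness, logits

# Smoke test with random inputs.
model = GRiTSketch()
images = torch.randn(2, 3, 224, 224)
tokens = torch.randint(0, 30522, (2, 12))
boxes, objectness, logits = model(images, tokens)
```

Because the text decoder is an unconstrained language model head rather than a fixed classifier, the same architecture can emit a single noun (detection) or a full sentence (dense captioning), which is the sense in which the abstract calls the framework open-set.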
