

Poster

Navigation Instruction Generation with BEV Perception and Large Language Models

Sheng Fan · Rui Liu · Wenguan Wang · Yi Yang

#121
Strong Double Blind: this paper was not made available on public preprint services during the review process.
[ Paper PDF ]
Wed 2 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Navigation instruction generation, which requires embodied agents to describe navigation routes, has been of great interest in robotics and human-computer interaction. Existing studies directly map sequences of 2D perspective observations to route descriptions. Though straightforward, this overlooks the geometric information and object semantics of the 3D environment. To address these limitations, we propose BEVInstructor, which incorporates Bird's Eye View (BEV) features into Multi-Modal Large Language Models (MLLMs) for embodied instruction generation. Specifically, BEVInstructor constructs a Perspective-BEV Visual Encoder that boosts the comprehension of 3D environments by fusing BEV and perspective features. The fused embeddings serve as visual prompts for MLLMs. To leverage the powerful language capabilities of MLLMs, we propose perspective-BEV prompt tuning for parameter-efficient updating. Based on the perspective-BEV prompts, we further devise an instance-guided iterative refinement pipeline that progressively improves the generated instructions. BEVInstructor achieves impressive performance across diverse datasets (i.e., R2R, REVERIE, and UrbanWalk). Our code will be released.
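The abstract's central architectural idea is to fuse BEV and perspective features and feed the result to an MLLM as visual prompt tokens. Below is a minimal, illustrative PyTorch sketch of one way such a fusion-into-prompts module could be wired up. All module names, dimensions, and design choices here (cross-attention fusion, learnable prompt queries, a linear projection into the LLM embedding space) are assumptions for illustration only and are not taken from the paper, whose code has not yet been released.

```python
import torch
import torch.nn as nn

class PerspectiveBEVFusion(nn.Module):
    """Illustrative fusion of perspective and BEV features into visual prompt tokens.

    Hypothetical sketch: layer choices and dimensions are assumptions, not the
    paper's actual Perspective-BEV Visual Encoder.
    """
    def __init__(self, persp_dim=1024, bev_dim=256, llm_dim=4096, num_prompts=32):
        super().__init__()
        # Cross-attention: BEV grid cells (queries) attend to perspective tokens.
        self.cross_attn = nn.MultiheadAttention(embed_dim=bev_dim, num_heads=8,
                                                kdim=persp_dim, vdim=persp_dim,
                                                batch_first=True)
        # Learnable queries that pool the fused map into a fixed set of prompts.
        self.prompt_queries = nn.Parameter(torch.randn(num_prompts, bev_dim))
        self.pool_attn = nn.MultiheadAttention(embed_dim=bev_dim, num_heads=8,
                                               batch_first=True)
        # Project the pooled prompts into the MLLM's embedding space.
        self.to_llm = nn.Linear(bev_dim, llm_dim)

    def forward(self, persp_feats, bev_feats):
        # persp_feats: (B, N_persp, persp_dim); bev_feats: (B, N_bev, bev_dim)
        fused, _ = self.cross_attn(query=bev_feats, key=persp_feats, value=persp_feats)
        fused = fused + bev_feats  # residual keeps the BEV geometry intact
        queries = self.prompt_queries.unsqueeze(0).expand(fused.size(0), -1, -1)
        prompts, _ = self.pool_attn(query=queries, key=fused, value=fused)
        return self.to_llm(prompts)  # (B, num_prompts, llm_dim) visual prompts


# Usage sketch: the resulting visual prompts would be prepended to the (frozen)
# MLLM's text embeddings, so only the fusion module and prompts are trained.
fusion = PerspectiveBEVFusion()
persp = torch.randn(2, 196, 1024)    # e.g. ViT patch tokens from perspective views
bev = torch.randn(2, 400, 256)       # e.g. a 20x20 BEV grid of features
visual_prompts = fusion(persp, bev)  # (2, 32, 4096)
```

Keeping the LLM frozen and training only the fusion module and prompt parameters mirrors the parameter-efficient prompt-tuning setup the abstract describes, though the exact tuning scheme in the paper may differ.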
