

Poster

ShapeLLM: Universal 3D Object Understanding for Embodied Interaction

Zekun Qi · Runpei Dong · Shaochen Zhang · Haoran Geng · Chunrui Han · Zheng Ge · Li Yi · Kaisheng Ma

# 185
[ Project Page ] [ Paper PDF ]
Thu 3 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

This paper presents ShapeLLM, the first 3D Multimodal Large Language Model (LLM) designed for embodied interaction, exploring universal 3D object understanding with 3D point clouds and language. ShapeLLM is built upon an improved 3D encoder obtained by extending ReCon to ReCon++, which benefits from multi-view image distillation for enhanced geometry understanding. Using ReCon++ as the 3D point cloud input encoder for LLMs, ShapeLLM is trained on constructed instruction-following data and tested on our newly human-curated benchmark, 3D MM-Vet. ReCon++ and ShapeLLM achieve state-of-the-art performance in 3D geometry understanding and language-unified 3D interaction tasks, such as embodied visual grounding.
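To make the "3D encoder feeding an LLM" idea concrete, below is a minimal, hypothetical sketch of how point-cloud tokens could be projected into a language model's embedding space and prepended to text tokens. All names (`PointCloudEncoder`, `ShapeLLMSketch`, `projector`) and dimensions are illustrative assumptions for this sketch, not the authors' ReCon++ or ShapeLLM implementation.

```python
import torch
import torch.nn as nn


class PointCloudEncoder(nn.Module):
    """Hypothetical stand-in for a ReCon++-style 3D encoder.

    Maps a point cloud of shape (B, N, 3) to a sequence of patch tokens
    (B, T, D) by splitting points into fixed-size groups and embedding
    each group with a small MLP followed by max-pooling.
    """

    def __init__(self, num_groups: int = 32, embed_dim: int = 256):
        super().__init__()
        self.num_groups = num_groups
        self.mlp = nn.Sequential(
            nn.Linear(3, 128), nn.GELU(), nn.Linear(128, embed_dim)
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        b, n, _ = points.shape
        # Naive grouping: chunk the points and pool each chunk into one token.
        groups = points.view(b, self.num_groups, n // self.num_groups, 3)
        return self.mlp(groups).max(dim=2).values  # (B, T, D)


class ShapeLLMSketch(nn.Module):
    """Minimal sketch: a linear projector maps encoder tokens into the
    language model's embedding space, where they are prepended to the
    text-token embeddings before being fed to the LLM backbone.
    """

    def __init__(self, llm_dim: int = 512, encoder_dim: int = 256):
        super().__init__()
        self.encoder = PointCloudEncoder(embed_dim=encoder_dim)
        self.projector = nn.Linear(encoder_dim, llm_dim)

    def forward(self, points: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        point_tokens = self.projector(self.encoder(points))  # (B, T, llm_dim)
        # Prefix the 3D tokens to the text tokens, as in typical multimodal LLMs.
        return torch.cat([point_tokens, text_embeds], dim=1)


if __name__ == "__main__":
    model = ShapeLLMSketch()
    pts = torch.randn(2, 1024, 3)   # two point clouds, 1024 points each
    txt = torch.randn(2, 16, 512)   # placeholder text-token embeddings
    print(model(pts, txt).shape)    # torch.Size([2, 48, 512])
```

In practice the prepended sequence would be passed to a pretrained LLM and trained on instruction-following data; this toy version only illustrates the token-projection interface.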
