

Poster

Motion Mamba: Efficient and Long Sequence Motion Generation

Zeyu Zhang · Akide Liu · Ian Reid · Richard Hartley · Bohan Zhuang · Hao Tang

# 231
[ Project Page ] [ Paper PDF ]
Tue 1 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Human motion generation is a significant pursuit in generative computer vision, yet achieving efficient, long-sequence motion generation remains challenging. Recent advances in state space models (SSMs), notably Mamba, have shown considerable promise in long-sequence modeling with an efficient hardware-aware design, making SSMs a promising foundation for motion generation models. Nevertheless, adapting SSMs to motion generation faces hurdles due to the lack of a specialized architecture for modeling motion sequences. To address these challenges, we make three key contributions. First, we propose Motion Mamba, a simple yet innovative approach that presents the first motion generation model built on SSMs. Second, we design a Hierarchical Temporal Mamba (HTM) block that processes temporal data through a symmetric architecture aimed at preserving motion consistency between frames. We also design a Bidirectional Spatial Mamba (BSM) block that processes latent poses bidirectionally to enhance the accuracy of motion generation within each temporal frame. Finally, the proposed method outperforms well-established methods on the HumanML3D and KIT-ML datasets, demonstrating strong capabilities in high-quality long-sequence motion modeling and real-time human motion generation.
