

Poster

Towards Open Domain Text-Driven Synthesis of Multi-Person Motions

Shan Mengyi · Lu Dong · Yutao Han · Yuan Yao · Tao Liu · Ifeoma Nwogu · Guo-Jun Qi · Mitch Hill

#296
Fri 4 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

This work aims to generate natural and diverse group motions of multiple humans from textual descriptions. While single-person text-to-motion generation is already extensively studied, it remains challenging to synthesize motions for more than one or two subjects from in-the-wild prompts, mainly due to the lack of available datasets. In this work, we curate human pose and motion datasets by estimating pose information from large-scale image and video datasets. Our models use a transformer-based diffusion framework that accommodates multiple datasets with any number of subjects or frames. Experiments explore the generation of both multi-person static poses and multi-person motion sequences. To our knowledge, our method is the first to generate multi-subject motion sequences with high diversity and fidelity from a large variety of textual prompts.
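The abstract does not spell out how a single transformer-based diffusion model can accommodate any number of subjects or frames. A common way to achieve this is to flatten (subject, frame) poses into one token sequence and use a padding mask so that batches can mix clips with different subject counts and lengths. The sketch below illustrates that idea only; all names, dimensions, and design choices (CLIP-style text embedding, a prepended condition token, sinusoidal positions over the flattened index) are our assumptions, not the authors' implementation.

```python
import math

import torch
import torch.nn as nn


class MultiPersonMotionDenoiser(nn.Module):
    """Illustrative transformer denoiser over flattened (subject, frame) tokens.

    Hypothetical sketch: dimensions and conditioning scheme are assumptions,
    not the paper's architecture.
    """

    def __init__(self, pose_dim=263, d_model=512, n_heads=8, n_layers=6, text_dim=512):
        super().__init__()
        self.d_model = d_model
        self.in_proj = nn.Linear(pose_dim, d_model)    # per-token pose embedding
        self.text_proj = nn.Linear(text_dim, d_model)  # pooled text-prompt embedding
        self.time_mlp = nn.Sequential(
            nn.Linear(d_model, d_model), nn.SiLU(), nn.Linear(d_model, d_model)
        )
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.out_proj = nn.Linear(d_model, pose_dim)   # predict clean pose (or noise)

    @staticmethod
    def sinusoidal_embedding(t, dim):
        # Standard sinusoidal embedding, reused for diffusion timesteps and
        # token positions.
        half = dim // 2
        freqs = torch.exp(-math.log(10000.0) * torch.arange(half, device=t.device) / half)
        args = t.float()[:, None] * freqs[None, :]
        return torch.cat([torch.cos(args), torch.sin(args)], dim=-1)

    def forward(self, x, t, text_emb, pad_mask):
        # x:        (B, P, T, pose_dim) noisy poses, zero-padded over subjects/frames
        # t:        (B,) diffusion timesteps
        # text_emb: (B, text_dim) pooled text embedding
        # pad_mask: (B, P*T) bool, True where a (subject, frame) slot is padding
        B, P, T, D = x.shape
        tokens = self.in_proj(x.reshape(B, P * T, D))
        # Position over the flattened index; a real system might embed subject
        # identity and frame index separately.
        pos = self.sinusoidal_embedding(torch.arange(P * T, device=x.device), self.d_model)
        tokens = tokens + pos[None, :, :]
        cond = self.text_proj(text_emb) + self.time_mlp(
            self.sinusoidal_embedding(t, self.d_model)
        )
        tokens = torch.cat([cond[:, None, :], tokens], dim=1)  # prepend condition token
        mask = torch.cat(
            [torch.zeros(B, 1, dtype=torch.bool, device=x.device), pad_mask], dim=1
        )
        h = self.encoder(tokens, src_key_padding_mask=mask)
        return self.out_proj(h[:, 1:]).reshape(B, P, T, D)


# Example: a batch mixing 2-person and 3-person clips, padded to P=3, T=60.
model = MultiPersonMotionDenoiser()
x = torch.randn(2, 3, 60, 263)
t = torch.randint(0, 1000, (2,))
text_emb = torch.randn(2, 512)
pad_mask = torch.zeros(2, 3 * 60, dtype=torch.bool)
pad_mask[0, 2 * 60:] = True  # first sample has only 2 real subjects
out = model(x, t, text_emb, pad_mask)  # (2, 3, 60, 263)
```

Because the padding mask excludes empty (subject, frame) slots from attention, the same weights can serve static-pose data (T = 1) and motion data of varying length, which is one plausible reading of how a single framework could span the paper's multiple curated datasets.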
