

Poster

Instant 3D Human Avatar Generation using Image Diffusion Models

Nikos Kolotouros · Thiemo Alldieck · Enric Corona · Eduard Gabriel Bazavan · Cristian Sminchisescu

Poster #235
Thu 3 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

We present AvatarPopUp, a method for fast, high-quality 3D human avatar generation from different input modalities, such as images and text prompts, with control over the generated pose and shape. The common theme is the use of diffusion-based image generation networks specialized for each particular task, followed by a 3D lifting network. We purposefully decouple generation from 3D modeling, which allows us to leverage powerful image synthesis priors trained on billions of text-image pairs. We fine-tune latent diffusion networks with additional image conditioning to solve tasks such as image generation and back-view prediction, and to support multiple, qualitatively different 3D hypotheses. Our partial fine-tuning approach allows us to adapt the networks for each task without inducing catastrophic forgetting. In experiments, we demonstrate that our method produces accurate, high-quality 3D avatars with diverse appearance that respect the multimodal text, image, and body control signals. Our approach can produce a 3D mesh in as few as 2 seconds (a four-orders-of-magnitude speedup over the vast majority of existing methods, most of which solve only a subset of our tasks and offer fewer controls), thus enabling applications that require the controlled 3D generation of human avatars at scale.
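The abstract mentions partial fine-tuning of a pretrained latent diffusion network with additional image conditioning, so that the image prior is adapted per task without catastrophic forgetting. The sketch below is not the authors' code; it is a minimal PyTorch illustration of that general idea, assuming a placeholder backbone (`FrozenBackbone`) and adapter (`cond_adapter`) invented here for illustration: the pretrained denoiser weights are frozen and only the newly added conditioning parameters receive gradient updates.

```python
# Hypothetical sketch of partial fine-tuning with image conditioning.
# All module names and shapes are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class FrozenBackbone(nn.Module):
    """Stand-in for a pretrained latent-diffusion denoiser (assumed)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim))

    def forward(self, z):
        return self.net(z)

class ImageConditionedDenoiser(nn.Module):
    """Pretrained backbone plus a small trainable adapter injecting image conditioning."""
    def __init__(self, dim=64, cond_dim=32):
        super().__init__()
        self.backbone = FrozenBackbone(dim)
        self.cond_adapter = nn.Linear(cond_dim, dim)  # newly added, trainable

    def forward(self, z, cond):
        return self.backbone(z + self.cond_adapter(cond))

model = ImageConditionedDenoiser()

# Freeze the pretrained prior; only the added conditioning layers are optimized,
# which is one way to adapt a network per task while limiting forgetting.
for p in model.backbone.parameters():
    p.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)

# One toy optimization step on random tensors (shapes are illustrative only).
z, cond, target = torch.randn(8, 64), torch.randn(8, 32), torch.randn(8, 64)
loss = nn.functional.mse_loss(model(z, cond), target)
loss.backward()
optimizer.step()
```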
