

Poster #277

CanonicalFusion: Generating Drivable 3D Human Avatars from Multiple Images

Jisu Shin · Junmyeong Lee · Seongmin Lee · Min-Gyu Park · Jumi Kang · Ju Hong Yoon · HAE-GON JEON

Strong Double Blind: this paper was not made available on public preprint services during the review process.
Wed 2 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

We present a novel framework, termed CanonicalFusion, for reconstructing animatable human avatars from multiple images. Our central idea is to integrate individual reconstruction results into the canonical space. Specifically, we first predict Linear Blend Skinning (LBS) weight maps and depth maps using a shared-encoder-dual-decoder network, enabling direct canonicalization of the 3D mesh from the predicted depth maps. Here, instead of predicting high-dimensional skinning weights, we infer compressed skinning weights, i.e., a 3-dimensional vector, with the aid of pre-trained MLP networks. We also introduce a forward skinning-based differentiable rendering scheme to merge the reconstructed results from multiple images. This scheme refines the initial mesh by reposing the canonical mesh via forward skinning and minimizing photometric and geometric errors between the rendered and predicted results. Our optimization scheme considers the positions and colors of vertices as well as the joint angles for each image, thereby mitigating the negative effects of pose errors. We conduct extensive experiments to demonstrate the effectiveness of our method and compare CanonicalFusion with state-of-the-art methods. Lastly, we provide our source code as supplementary material to offer clear insight into the implementation of our work.
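To illustrate the idea of compressed skinning weights, the sketch below trains a small MLP autoencoder that maps per-vertex LBS weights to a 3-dimensional latent and back. This is a minimal illustration, not the paper's implementation: the 24-joint SMPL-style rig, the layer sizes, and all names (SkinningWeightCodec, etc.) are assumptions.

```python
# Minimal sketch: compress per-vertex LBS weights to a 3-D latent with an MLP
# autoencoder. Assumes an SMPL-style skeleton with 24 joints (hypothetical).
import torch
import torch.nn as nn

NUM_JOINTS = 24   # assumption: SMPL-style rig
LATENT_DIM = 3    # compressed skinning-weight dimension from the abstract

class SkinningWeightCodec(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(NUM_JOINTS, 64), nn.ReLU(),
            nn.Linear(64, LATENT_DIM),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_JOINTS),
        )

    def forward(self, w):
        z = self.encoder(w)                              # (V, 3) latent codes
        w_hat = torch.softmax(self.decoder(z), dim=-1)   # non-negative, sums to 1
        return z, w_hat

# Train the codec to reconstruct valid skinning weights (random here).
codec = SkinningWeightCodec()
w = torch.rand(1024, NUM_JOINTS)
w = w / w.sum(dim=-1, keepdim=True)                      # normalize to sum to 1
opt = torch.optim.Adam(codec.parameters(), lr=1e-3)
for _ in range(100):
    z, w_hat = codec(w)
    loss = nn.functional.mse_loss(w_hat, w)
    opt.zero_grad(); loss.backward(); opt.step()
```

Once such a codec is trained, a dense prediction network can regress the 3-channel latent map instead of a 24-channel weight map, and the frozen decoder recovers full skinning weights.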
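The refinement stage can be pictured as a joint optimization over canonical geometry, vertex color, and per-image pose. The sketch below shows forward linear blend skinning and a loop that minimizes geometric and photometric-style errors against per-image targets. It is a simplified stand-in, not the paper's scheme: the differentiable renderer is omitted, joint transforms are optimized directly rather than through joint angles, and all targets and names are hypothetical.

```python
# Minimal sketch: forward LBS plus joint optimization of canonical vertices,
# vertex colors, and per-image joint transforms (a simplification of
# optimizing joint angles). All data here is random and hypothetical.
import torch

def lbs(verts_canon, weights, joint_transforms):
    """Forward LBS: verts (V,3), weights (V,J), transforms (J,4,4)."""
    V = verts_canon.shape[0]
    homo = torch.cat([verts_canon, torch.ones(V, 1)], dim=-1)        # (V,4)
    per_joint = torch.einsum('jab,vb->vja', joint_transforms, homo)  # (V,J,4)
    posed = (weights.unsqueeze(-1) * per_joint).sum(dim=1)           # (V,4)
    return posed[:, :3]

V, J, N = 1024, 24, 4                                  # verts, joints, images
verts = torch.randn(V, 3, requires_grad=True)          # canonical vertices
colors = torch.rand(V, 3, requires_grad=True)          # vertex colors
weights = torch.softmax(torch.randn(V, J), dim=-1)     # fixed LBS weights
transforms = torch.eye(4).repeat(N, J, 1, 1).requires_grad_()  # per-image pose

# Hypothetical per-image targets (e.g., predicted posed geometry and color).
target_verts = [torch.randn(V, 3) for _ in range(N)]
target_colors = [torch.rand(V, 3) for _ in range(N)]

opt = torch.optim.Adam([verts, colors, transforms], lr=1e-2)
for _ in range(200):
    loss = 0.0
    for i in range(N):
        posed = lbs(verts, weights, transforms[i])               # repose mesh
        loss = loss + (posed - target_verts[i]).pow(2).mean()    # geometric term
        loss = loss + (colors - target_colors[i]).pow(2).mean()  # photometric proxy
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the pose parameters are optimized per image alongside the shared canonical mesh, per-image pose errors are absorbed rather than baked into the fused geometry, which is the effect the abstract describes.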
