

Poster

PhysAvatar: Learning the Physics of Dressed 3D Avatars from Visual Observations

Yang Zheng · Qingqing Zhao · Guandao Yang · Wang Yifan · Donglai Xiang · Florian Dubost · Dmitry Lagun · Thabo Beeler · Federico Tombari · Leonidas Guibas · Gordon Wetzstein

#293
Tue 1 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Modeling and rendering photorealistic avatars is of crucial importance in many applications. Existing methods that build a 3D avatar from visual observations, however, struggle to reconstruct clothed humans. We introduce PhysAvatar, a novel framework that combines inverse rendering with inverse physics to automatically estimate the shape and appearance of a human from multi-view video data along with the physical parameters of the fabric of their clothes. For this purpose, we adopt a mesh-aligned 4D Gaussian technique for spatio-temporal mesh tracking as well as a physically based inverse renderer to estimate the intrinsic material properties. PhysAvatar integrates a physics simulator to estimate the physical parameters of the garments using gradient-based optimization in a principled manner. These novel capabilities enable PhysAvatar to create high-quality novel-view renderings of avatars dressed in loose-fitting clothes under motions and lighting conditions not seen in the training data. This marks a significant advancement towards modeling photorealistic digital humans using physically based inverse rendering with physics in the loop.
