Poster

Learning 3D-aware GANs from Unposed Images with Template Feature Field

Xinya Chen · Hanlei Guo · Yanrui Bin · Shangzhan Zhang · Yuanbo Yang · Yujun Shen · Yue Wang · Yiyi Liao

# 146
Wed 2 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Collecting accurate camera poses for training images has been shown to benefit the learning of 3D-aware generative adversarial networks (GANs), yet doing so can be quite expensive in practice. This work targets learning 3D-aware GANs from unposed images, for which we propose on-the-fly pose estimation of training images with a learned template feature field (TEFF). Concretely, in addition to a generative radiance field as in previous approaches, we ask the generator to also learn a field of 2D semantic features that shares its density with the radiance field. This framework allows us to acquire a canonical 3D feature template by leveraging the dataset mean discovered by the generative model, and then to efficiently estimate pose parameters on real data. Experimental results on various challenging datasets demonstrate the superiority of our approach over state-of-the-art alternatives, both qualitatively and quantitatively. Code and models will be made public.
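The core mechanism in the abstract can be illustrated with a deliberately tiny sketch. This is not the authors' implementation: the 1-D "renderer", the jittered samples standing in for GAN draws, and the grid search over an integer "pose" are all hypothetical stand-ins. What it does show is the two ideas the abstract names: a colour branch and a semantic-feature branch weighted by one shared density field, and pose estimation by matching a feature rendering against a canonical template obtained as a mean over generated samples.

```python
# Hypothetical toy sketch (not the paper's code): shared-density rendering of
# colour and semantic features, a mean-based feature template, and pose
# estimation by matching against that template.
import numpy as np

rng = np.random.default_rng(0)

def render(density, appearance, pose):
    """Toy 'volume rendering': weight an appearance head by the shared
    density, then shift by an integer 1-D 'pose' to mimic a camera change."""
    weights = density / density.sum()      # shared across both branches
    image = weights[:, None] * appearance  # (n_pts, channels)
    return np.roll(image, pose, axis=0)

# One shared density field, two heads: RGB colour and 2-D semantic features.
n_pts = 32
density = rng.random(n_pts) + 0.1
color_head = rng.random((n_pts, 3))    # radiance branch (unused below)
feature_head = rng.random((n_pts, 2))  # semantic-feature branch

# Canonical template: mean of feature renderings over many samples
# (jittered feature heads stand in for draws from the generative model).
samples = [
    render(density, feature_head + 0.01 * rng.standard_normal((n_pts, 2)), 0)
    for _ in range(64)
]
template = np.mean(samples, axis=0)

# A "real" observation: the feature field rendered under an unknown pose.
true_pose = 5
observed = render(density, feature_head, true_pose) \
    + 0.001 * rng.standard_normal((n_pts, 2))

# On-the-fly pose estimation: choose the pose whose feature rendering
# best matches the observation.
errors = [np.linalg.norm(render(density, feature_head, p) - observed)
          for p in range(n_pts)]
estimated_pose = int(np.argmin(errors))
print(estimated_pose)  # recovers true_pose
```

The design point the sketch preserves is that only the appearance heads differ between branches; the geometry (density) is learned once and reused, which is what lets feature renderings stay geometrically consistent with the radiance field.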
