

Poster

Taming Latent Diffusion Model for Neural Radiance Field Inpainting

Chieh Hubert Lin · Changil Kim · Jia-Bin Huang · Qinbo Li · Chih-Yao Ma · Johannes Kopf · Ming-Hsuan Yang · Hung-Yu Tseng

Tue 1 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Neural Radiance Field (NeRF) is a representation for 3D reconstruction from multi-view images. Although recent work has shown preliminary success in editing a reconstructed NeRF with a diffusion prior, existing methods still struggle to synthesize reasonable geometry in completely uncovered regions. One major reason is that the high diversity of synthetic content from the diffusion model hinders the convergence of the radiance field to crisp, deterministic geometry. Moreover, applying latent diffusion models to real data often yields a textural shift that is incoherent with the image condition due to auto-encoding errors. Both problems are further reinforced by the use of pixel-distance losses. To address these issues, we propose to temper the stochasticity of the diffusion model with per-scene customization and to mitigate the textural shift with masked adversarial training. Our analyses also reveal that the commonly used pixel and perceptual losses are harmful to the NeRF inpainting task. Through rigorous experiments, our framework yields state-of-the-art NeRF inpainting results on various real-world scenes.
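To make the masked adversarial training idea concrete, below is a minimal PyTorch-style sketch of a hinge adversarial loss restricted to the inpainting mask. The discriminator `disc`, its per-pixel logit output, and the tensor shapes are assumptions introduced here for illustration; this is a sketch of the general technique, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def masked_adversarial_loss(disc, real_img, fake_img, mask):
    """Hinge adversarial losses restricted to the inpainted region.

    disc     -- hypothetical patch discriminator returning per-pixel logits
    real_img -- ground-truth view, shape (B, 3, H, W)
    fake_img -- rendering of the inpainted radiance field, same shape
    mask     -- inpainting mask, shape (B, 1, H, W); 1 inside the hole
    """
    logits_real = disc(real_img)
    logits_fake = disc(fake_img.detach())
    # Resize the mask to the discriminator's output resolution so only
    # logits inside the inpainted region contribute to the loss.
    m = F.interpolate(mask, size=logits_real.shape[-2:], mode="nearest")
    eps = 1e-8
    d_loss = ((F.relu(1.0 - logits_real) * m).sum()
              + (F.relu(1.0 + logits_fake) * m).sum()) / (m.sum() + eps)
    # Generator term: push masked fake logits toward the "real" side.
    g_loss = -(disc(fake_img) * m).sum() / (m.sum() + eps)
    return d_loss, g_loss
```

Restricting the loss to the mask means the adversarial term only shapes the inpainted textures, while observed regions remain under their original reconstruction supervision.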
