

Poster

DATENeRF: Depth-Aware Text-based Editing of NeRFs

Sara Rojas Martinez · Julien Philip · Kai Zhang · Sai Bi · Fujun Luan · Bernard Ghanem · Kalyan Sunkavalli

Fri 4 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Recent diffusion models have demonstrated impressive capabilities for text-based 2D image editing. Applying similar ideas to edit a NeRF scene remains challenging because editing 2D frames individually does not produce multiview-consistent results. We make the key observation that the geometry of a NeRF scene provides a way to unify these 2D edits. We leverage this geometry in a depth-conditioned ControlNet to improve the consistency of individual 2D image edits. Furthermore, we propose an inpainting scheme that uses the NeRF scene depth to propagate 2D edits across images while staying robust to errors and resampling issues. We demonstrate that this leads to more consistent, realistic, and detailed editing results compared to previous state-of-the-art text-based NeRF editing methods.
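The abstract describes two geometric ingredients: a depth-conditioned ControlNet for editing individual views, and a depth-based warp that propagates an edit to neighboring views, with leftover holes filled by inpainting. The sketch below is not the authors' implementation; it illustrates both ideas under stated assumptions. The diffusers classes (`ControlNetModel`, `StableDiffusionControlNetInpaintPipeline`) and the `control_image` argument are real, but the specific model IDs, the pinhole-camera convention, the depth-test tolerance, and the helper names `edit_view` and `reproject` are illustrative choices.

```python
# Minimal sketch (not the paper's code) of depth-aware NeRF editing:
# (1) edit one rendered view with a depth-conditioned ControlNet inpainting
#     pipeline, so the 2D edit respects the scene geometry;
# (2) warp that edit into a nearby view using the NeRF depth, flagging
#     disoccluded or depth-inconsistent pixels for inpainting.
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline

# Model IDs are illustrative assumptions, not taken from the paper.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")


def edit_view(rgb, mask, depth, prompt):
    """Inpaint the masked region of one rendered view, conditioned on the
    view's NeRF depth map so the edit follows the underlying geometry."""
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    depth_img = Image.fromarray((255 * d).astype(np.uint8)).convert("RGB")
    return pipe(
        prompt=prompt,
        image=Image.fromarray(rgb),
        mask_image=Image.fromarray(mask),
        control_image=depth_img,
    ).images[0]


def reproject(edited, depth_src, K, T_src2tgt, depth_tgt, h, w, tol=0.05):
    """Warp an edited source view into a target view via the NeRF depth.
    Assumes a pinhole camera with intrinsics K and a 4x4 source-to-target
    transform. Pixels that land outside the image or disagree with the
    target view's own depth stay unfilled (hole mask = 255) and can be
    inpainted with the same depth-conditioned pipeline."""
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], -1).reshape(-1, 3).astype(np.float64)
    # Back-project source pixels to 3D using the source depth map.
    pts = (np.linalg.inv(K) @ pix.T) * depth_src.reshape(1, -1)
    # Move the points into the target camera frame and project.
    pts = T_src2tgt[:3, :3] @ pts + T_src2tgt[:3, 3:4]
    proj = K @ pts
    u = (proj[0] / proj[2]).round().astype(int)
    v = (proj[1] / proj[2]).round().astype(int)
    z = proj[2]
    out = np.zeros((h, w, 3), np.uint8)
    hole = np.full((h, w), 255, np.uint8)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (z > 0)
    # Depth test: keep a warped pixel only if it agrees with the target
    # view's NeRF depth, rejecting occluded or badly resampled pixels.
    agree = np.abs(z[valid] - depth_tgt[v[valid], u[valid]]) \
        < tol * depth_tgt[v[valid], u[valid]]
    src_idx = np.flatnonzero(valid)[agree]
    out[v[valid][agree], u[valid][agree]] = np.asarray(edited).reshape(-1, 3)[src_idx]
    hole[v[valid][agree], u[valid][agree]] = 0
    return out, hole
```

In this reading, inpainting only the hole mask in each subsequent view (rather than re-editing from scratch) is what keeps the propagated edits multiview-consistent while the depth conditioning keeps them anchored to the scene geometry.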
