Poster

Parameterization-driven Neural Surface Reconstruction for Object-oriented Editing in Neural Rendering

Baixin Xu · Jiangbei Hu · Fei Hou · Kwan-Yee Lin · Wayne Wu · Chen Qian · Ying He

[ Project Page ]
Wed 2 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

The growing capabilities of neural rendering have increased the demand for new techniques that enable intuitive editing of 3D objects, particularly when they are represented as neural implicit surfaces. In this paper, we present a novel neural algorithm for parameterizing neural implicit surfaces onto simple parametric domains, such as spheres and polycubes, thereby facilitating visualization and various editing tasks. Specifically, for polycubes, our method allows the user to specify the desired number of cubes for the domain and then learns a cube configuration that closely resembles the geometry of the target 3D object. It then computes a bi-directional deformation between the object and the domain: a forward mapping from points on the object's zero-level set to the parametric domain, and an inverse deformation that maps domain points back to the surface. To ensure the map is nearly bijective, we employ a cycle loss while optimizing the smoothness of both deformations. The quality of the computed parameterization, as measured by angle and area distortion, is maintained through a Laplacian regularizer and an optimized, learned parametric domain. Designed for compatibility, our framework integrates seamlessly with existing neural rendering pipelines, taking as input multi-view images of a single object, or of multiple objects with similar geometry, to reconstruct the 3D geometry and compute the corresponding texture map. Our method is fully automatic and end-to-end, eliminating the need for any prior information. We also introduce a simple yet effective technique for intrinsic radiance decomposition, facilitating both view-independent material editing and view-dependent shading editing. Our method allows edited textures to be rendered immediately through volume rendering, without network re-training. We demonstrate the effectiveness of our method on images of human heads and man-made objects. We will make the source code publicly available.
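To make the bi-directional deformation and the cycle loss mentioned above concrete, below is a minimal PyTorch sketch. It is an illustration under assumptions, not the authors' implementation: the names DeformMLP, forward_deform, backward_deform, cycle_loss, and laplacian_smoothness, the network architecture, the stochastic form of the Laplacian regularizer, and all loss weights are hypothetical.

import torch
import torch.nn as nn

class DeformMLP(nn.Module):
    # Hypothetical coordinate MLP mapping R^3 -> R^3, standing in for one
    # of the two deformation networks described in the abstract.
    def __init__(self, hidden=256, layers=4):
        super().__init__()
        dims = [3] + [hidden] * layers + [3]
        mods = []
        for i in range(len(dims) - 1):
            mods.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                mods.append(nn.Softplus(beta=100))
        self.net = nn.Sequential(*mods)

    def forward(self, x):
        # Predict a displacement so the map is initialized near the identity.
        return x + self.net(x)

forward_deform = DeformMLP()   # object surface -> parametric domain
backward_deform = DeformMLP()  # parametric domain -> object surface

def cycle_loss(surface_pts):
    # Near-bijectivity: mapping a surface point into the domain and back
    # should approximately recover the original point.
    mapped = forward_deform(surface_pts)
    recovered = backward_deform(mapped)
    return (recovered - surface_pts).pow(2).sum(-1).mean()

def laplacian_smoothness(pts, deform, eps=1e-2, k=6):
    # A stochastic smoothness penalty (an assumption, not the paper's exact
    # regularizer): the displacement at a point should agree with the mean
    # displacement at nearby jittered samples.
    center = deform(pts) - pts
    neighbors = pts.unsqueeze(1) + eps * torch.randn(pts.shape[0], k, 3)
    flat = neighbors.reshape(-1, 3)
    nb_disp = (deform(flat) - flat).reshape(pts.shape[0], k, 3).mean(1)
    return (center - nb_disp).pow(2).sum(-1).mean()

# Usage sketch: surface samples would come from the reconstructed zero-level
# set in practice; here random points and the 0.1 weight are placeholders.
pts = torch.rand(1024, 3) * 2 - 1
loss = cycle_loss(pts) + 0.1 * laplacian_smoothness(pts, forward_deform)
loss.backward()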
