

Poster

Surface-Centric Modeling for High-Fidelity Generalizable Neural Surface Reconstruction

Rui Peng · Shihe Shen · Kaiqiang Xiong · Huachen Gao · Jianbo Jiao · Xiaodong Gu · Ronggang Wang

Strong blind review: This paper was not made available on public preprint services during the review process.
Fri 4 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Generalizable neural surface reconstruction methods have attracted widespread attention for their superior reconstruction speed and quality, especially in sparse-view settings. However, existing methods are hampered by memory constraints or the requirement of ground-truth depth supervision, and cannot recover satisfactory geometric details. To this end, we propose SuRF, a new surface-centric framework that incorporates a new region sparsification based on a matching field, achieving a good trade-off among performance, efficiency, and scalability. To our knowledge, this is the first unsupervised method to achieve end-to-end sparsification, powered by the introduced matching field, which leverages the weight distribution to efficiently locate the boundary regions that contain the surface. Instead of predicting an SDF value for each voxel, we present a new region sparsification scheme that sparsifies the volume by judging whether each voxel lies inside the surface region. In this way, our model can exploit higher-frequency features around the surface with less memory and computational consumption. Extensive experiments on popular datasets demonstrate that our reconstructions exhibit high-quality details and achieve new state-of-the-art performance. We will release our code upon acceptance.
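To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of sparsifying samples via a weight distribution: given per-ray weights over depth samples, keep only the samples near each ray's weight peak, i.e. the region most likely to contain the surface. The names `sparsify_region` and `keep_radius`, and the use of a simple argmax over weights, are illustrative assumptions, not details from the paper.

```python
import numpy as np

def sparsify_region(weights: np.ndarray, keep_radius: int = 2) -> np.ndarray:
    """Given per-ray weight distributions over depth samples (shape [rays, samples]),
    return a boolean mask keeping only samples within keep_radius of each ray's
    weight peak (the likely surface crossing). Illustrative, not the paper's method."""
    peak = np.argmax(weights, axis=1)                  # most surface-like sample per ray
    idx = np.arange(weights.shape[1])[None, :]         # sample indices, broadcast per ray
    mask = np.abs(idx - peak[:, None]) <= keep_radius  # True inside the surface region
    return mask

# Toy example: 2 rays, 8 depth samples each; weights peak at samples 2 and 5.
w = np.array([[0.0, 0.1, 0.6, 0.2, 0.05, 0.05, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.1, 0.2,  0.5,  0.15, 0.05]])
mask = sparsify_region(w, keep_radius=1)
print(mask.sum(axis=1))  # 3 samples kept per ray
```

Only the kept samples would then be refined with higher-frequency features, which is where the memory savings claimed in the abstract would come from.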
