

Poster

SplatFields: Neural Gaussian Splats for Sparse 3D and 4D Reconstruction

Marko Mihajlovic · Sergey Prokudin · Siyu Tang · Robert Maier · Federica Bogo · Tony Tung · Edmond Boyer

Poster #328
Strong Double Blind: this paper was not made available on public preprint services during the review process.
Wed 2 Oct, 1:30 a.m. – 3:30 a.m. PDT

Abstract:

Digitizing static scenes and dynamic events from multi-view images has long been a challenge in computer vision and graphics. Recently, 3D Gaussian Splatting (3DGS) has emerged as a practical and scalable reconstruction method, gaining popularity due to its impressive reconstruction quality, real-time rendering speed, and compatibility with widely used visualization tools. However, the method requires a substantial number of input views to achieve high-quality scene reconstruction, which is a significant practical bottleneck. This challenge is especially pronounced when capturing dynamic scenes, where deploying an extensive camera array can be prohibitively costly. In this work, we identify the lack of spatial autocorrelation among splat features as one of the factors contributing to the suboptimal performance of 3DGS in sparse reconstruction settings. To address this issue, we propose an optimization strategy that effectively regularizes splat features by modeling them as the outputs of a corresponding implicit neural field. This yields a consistent improvement in reconstruction quality across various scenarios. Our approach handles both static and dynamic cases, as demonstrated by extensive testing across different setups and scene complexities.
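The core idea described in the abstract is to predict each splat's attributes from an implicit neural field evaluated at the splat's position, rather than optimizing them as free per-splat parameters, so that nearby splats receive spatially correlated features. The following PyTorch sketch illustrates that idea under stated assumptions: the class name `SplatFeatureField`, the MLP architecture, the sinusoidal positional encoding, and the attribute parameterization are hypothetical choices for illustration, not the authors' implementation.

```python
# Hypothetical sketch: splat features as outputs of a shared neural field.
# Gradients from the rendering loss flow through the MLP, which acts as an
# implicit spatial-smoothness regularizer on the predicted features.
import math
import torch
import torch.nn as nn


class SplatFeatureField(nn.Module):
    """Maps a splat's 3D center to its appearance and shape attributes."""

    def __init__(self, hidden: int = 128, n_freqs: int = 6):
        super().__init__()
        self.n_freqs = n_freqs
        in_dim = 3 + 3 * 2 * n_freqs  # xyz + sin/cos positional encoding
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 + 1 + 3 + 4),  # rgb, opacity, scale, rotation quaternion
        )

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # Standard sinusoidal positional encoding of the splat centers.
        freqs = 2.0 ** torch.arange(self.n_freqs, device=x.device) * math.pi
        angles = x[..., None] * freqs                     # (N, 3, n_freqs)
        enc = torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(-2)
        return torch.cat([x, enc], dim=-1)

    def forward(self, centers: torch.Tensor):
        out = self.mlp(self.encode(centers))
        rgb = out[..., 0:3].sigmoid()
        opacity = out[..., 3:4].sigmoid()
        scale = out[..., 4:7].exp()                        # positive scales
        rotation = nn.functional.normalize(out[..., 7:11], dim=-1)  # unit quaternion
        return rgb, opacity, scale, rotation


# Usage sketch: because all splats share the MLP weights, features of nearby
# centers vary smoothly in space; training optimizes the field (and centers)
# instead of an unconstrained per-splat feature table.
centers = torch.randn(10_000, 3, requires_grad=True)      # splat positions
field = SplatFeatureField()
rgb, opacity, scale, rotation = field(centers)
# These tensors would then be passed to a differentiable Gaussian rasterizer
# and supervised with a photometric loss on the sparse input views.
```

The design choice illustrated here is that the weight sharing of the field, rather than any explicit smoothness penalty, is what couples the features of neighboring splats.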
