

Poster

MVPGS: Excavating Multi-view Priors for Gaussian Splatting from Sparse Input Views

Wangze Xu · Huachen Gao · Shihe Shen · Rui Peng · Jianbo Jiao · Ronggang Wang

Strong Double Blind: This paper was not made available on public preprint services during the review process.
Fri 4 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Recently, the advancement of Neural Radiance Fields (NeRF) has facilitated few-shot Novel View Synthesis (NVS), a significant challenge in 3D vision applications. Despite numerous attempts to reduce NeRF's dense-input requirement, it still suffers from time-consuming training and rendering. More recently, 3D Gaussian Splatting (3DGS) achieves real-time, high-quality rendering with an explicit point-based representation. However, similar to NeRF, it tends to overfit the training views due to a lack of constraints. In this paper, we propose a few-shot NVS method that excavates multi-view priors based on 3D Gaussian Splatting. We leverage recent learning-based Multi-view Stereo (MVS) to enhance the quality of the geometric initialization for 3DGS. To mitigate overfitting, we propose a forward-warping method that derives additional, scene-conforming appearance constraints from the computed geometry. Furthermore, to facilitate proper convergence of the optimization, we introduce a view-consistent geometry constraint on the Gaussian parameters and use monocular depth regularization as compensation. Experiments show that the proposed method achieves state-of-the-art performance with real-time rendering speed.
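The forward-warping idea mentioned in the abstract can be illustrated with a short sketch. The snippet below is a minimal, generic implementation of depth-based forward warping, not the authors' code: each source-view pixel is back-projected with its (e.g. MVS-estimated) depth, transformed into a nearby target view, and splatted with a z-buffer to form a pseudo appearance target for that view. The names (forward_warp, K, T_src2tgt) are hypothetical, and a pinhole camera model is assumed.

```python
import numpy as np

def forward_warp(src_img, src_depth, K, T_src2tgt, H, W):
    """Splat source-view pixels into a target view using known depth.

    Minimal sketch of forward warping: back-project each source pixel with
    its depth, move it into the target camera frame, and splat it to the
    nearest target pixel. A z-buffer keeps the closest surface when several
    source pixels land on the same target location.
    """
    tgt_img = np.zeros((H, W, 3), dtype=src_img.dtype)
    zbuf = np.full((H, W), np.inf)

    # Homogeneous pixel coordinates of the source view (row-major order).
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).astype(np.float64)

    # Back-project to 3D in the source camera frame, then transform to the target frame.
    cam_pts = (np.linalg.inv(K) @ pix.T) * src_depth.reshape(1, -1)
    cam_pts_h = np.vstack([cam_pts, np.ones((1, cam_pts.shape[1]))])
    tgt_pts = (T_src2tgt @ cam_pts_h)[:3]

    # Project into the target image plane.
    proj = K @ tgt_pts
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    z = proj[2]

    # Splat only points that fall in front of the camera and inside the image.
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    src_colors = src_img.reshape(-1, 3)
    for i in np.flatnonzero(valid):
        if z[i] < zbuf[v[i], u[i]]:  # z-buffer visibility test
            zbuf[v[i], u[i]] = z[i]
            tgt_img[v[i], u[i]] = src_colors[i]
    return tgt_img, zbuf
```

The warped image and its z-buffer could then serve as an extra photometric target when rendering the same pose with 3DGS; how the paper weights or masks such a constraint is not specified in the abstract.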
