Poster #319
CoherentGS: Sparse Novel View Synthesis with Coherent 3D Gaussians
Avinash Paliwal · Wei Ye · Jinhui Xiong · Dmytro Kotovenko · Rakesh Ranjan · Vikas Chandra · Nima Khademi Kalantari
The field of 3D reconstruction from images has rapidly evolved in the past few years, first with the introduction of Neural Radiance Fields (NeRF) and more recently with 3D Gaussian Splatting (3DGS). The latter provides a significant edge over NeRF in terms of fast training and real-time inference while also improving reconstruction quality. Although the current 3DGS approach works well for dense input images, its unstructured, point-cloud-like representation quickly overfits in the more challenging setup of sparse training images (e.g., 3 images), producing a representation that appears as a jumble of needles from novel views. We propose to address this issue through regularized optimization and depth-based initialization. Specifically, we optimize the Gaussian blobs to smoothly and independently deform different object surfaces, compensating for inaccuracies in the initialization, by utilizing an implicit convolutional decoder and a total variation loss. To support our regularized optimization, we initialize a 3D Gaussian representation from each input view through a novel technique that utilizes monocular depth. We demonstrate significant improvements in recovering scene geometry and texture compared to state-of-the-art sparse-view NeRF-based approaches on a variety of scenes.
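To make the two ingredients named above concrete, the sketch below illustrates (1) initializing per-pixel 3D Gaussian centers by unprojecting a monocular depth map, and (2) a total variation loss that encourages neighboring Gaussians on the same surface to deform coherently. This is a minimal illustration under assumed conventions, not the authors' implementation: the function names (`unproject_depth`, `tv_loss`), the pinhole intrinsics (`fx`, `fy`, `cx`, `cy`), and the per-pixel offset field are all illustrative assumptions.

```python
import torch

def unproject_depth(depth, fx, fy, cx, cy):
    """Lift an HxW monocular depth map to per-pixel 3D points (camera frame),
    which can serve as initial 3D Gaussian centers for one input view."""
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H, dtype=depth.dtype),
                          torch.arange(W, dtype=depth.dtype), indexing="ij")
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return torch.stack([x, y, depth], dim=-1)  # (H, W, 3)

def tv_loss(offsets):
    """Total variation over an (H, W, C) grid of per-Gaussian deformation
    offsets: penalizes differences between horizontally and vertically
    adjacent values, so nearby Gaussians move smoothly together instead of
    scattering into independent 'needles'."""
    dh = (offsets[1:, :, :] - offsets[:-1, :, :]).abs().mean()
    dw = (offsets[:, 1:, :] - offsets[:, :-1, :]).abs().mean()
    return dh + dw

# Usage sketch: initialize centers from a (stand-in) monocular depth map,
# then regularize a learned per-pixel deformation field during optimization.
depth = torch.rand(64, 64) * 5.0 + 1.0                   # placeholder depth
centers = unproject_depth(depth, fx=60.0, fy=60.0, cx=32.0, cy=32.0)
offsets = torch.zeros_like(centers, requires_grad=True)  # deformation field
loss = tv_loss(offsets)          # would be added to the rendering loss
loss.backward()
```

Note that in the paper the deformation is predicted by an implicit convolutional decoder rather than optimized as a free per-pixel variable; the grid-structured offsets here stand in for that output only to show where the smoothness regularizer applies.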