

Poster

Monocular Occupancy Prediction for Scalable Indoor Scenes

Hongxiao Yu · Yuqi Wang · Yuntao Chen · Zhaoxiang Zhang

# 313
Strong Double Blind: This paper was not made available on public preprint services during the review process.
Tue 1 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Camera-based 3D occupancy prediction has recently garnered increasing attention in outdoor driving scenes. However, indoor scenes remain relatively unexplored. The core differences in indoor scenes lie in the complexity of scene scale and the variance in object size. In this paper, we propose a novel method, named ISO, for predicting indoor scene occupancy from monocular images. ISO harnesses the advantages of a pretrained depth model to achieve accurate depth predictions. It then employs a Dual Feature Line of Sight Projection (D-FLoSP) module to facilitate the learning of 3D voxel features. Additionally, we introduce Occ-ScanNet, a large-scale occupancy benchmark for indoor scenes. With a dataset 40 times larger than NYUv2, it facilitates future scalable research in indoor scene analysis. Experimental results on both NYUv2 and Occ-ScanNet demonstrate that our method achieves state-of-the-art performance. The dataset and code will be made publicly available.
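The abstract gives no implementation details, but for context, a FLoSP-style projection (as popularized by MonoScene, which the D-FLoSP name suggests this work extends) lifts 2D image features into a 3D voxel grid by projecting each voxel center into the image and sampling the feature map there. The sketch below is illustrative only, assuming a pinhole camera and PyTorch; the function and argument names (`lift_features_to_voxels`, `voxel_centers`, etc.) are hypothetical and not the authors' code.

```python
import torch
import torch.nn.functional as F


def lift_features_to_voxels(feat2d, K, voxel_centers, image_size):
    """Sample 2D image features at the projections of 3D voxel centers.

    feat2d:        (C, H, W) feature map from the 2D image encoder.
    K:             (3, 3) camera intrinsics.
    voxel_centers: (N, 3) voxel centers in the camera coordinate frame.
    image_size:    (img_h, img_w) of the image the intrinsics refer to.
    Returns:       (C, N) per-voxel features; voxels projecting outside
                   the view frustum receive zeros.
    """
    img_h, img_w = image_size

    # Pinhole projection: x_img = K @ X_cam, then divide by depth.
    proj = (K @ voxel_centers.T).T          # (N, 3)
    z = proj[:, 2]
    u = proj[:, 0] / z.clamp(min=1e-6)      # pixel column
    v = proj[:, 1] / z.clamp(min=1e-6)      # pixel row

    # Normalize pixel coordinates to [-1, 1] for grid_sample (x first, then y).
    grid = torch.stack([2.0 * u / (img_w - 1) - 1.0,
                        2.0 * v / (img_h - 1) - 1.0], dim=-1)
    grid = grid.view(1, 1, -1, 2)           # (1, 1, N, 2)

    sampled = F.grid_sample(feat2d.unsqueeze(0), grid, mode='bilinear',
                            align_corners=True, padding_mode='zeros')
    feats = sampled.squeeze(0).squeeze(1)   # (C, N)

    # Zero out voxels behind the camera or outside the image bounds.
    valid = (z > 1e-6) & (u >= 0) & (u <= img_w - 1) & (v >= 0) & (v <= img_h - 1)
    return feats * valid.to(feats.dtype)
```

A D-FLoSP-style variant would presumably also weight the sampled features by a depth distribution along each line of sight, which is where the pretrained depth model described in the abstract would enter; that weighting is omitted from this sketch.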
