

Poster

ProtoComp: Diverse Point Cloud Completion with Controllable Prototype

Xumin Yu · Yanbo Wang · Jie Zhou · Jiwen Lu

Strong Double Blind review: This paper was not made available on public preprint services during the review process.
Thu 3 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Point cloud completion aims to reconstruct the geometry of partial point clouds captured by various sensors. Training point cloud completion models has traditionally relied on synthetic datasets, which cover limited categories and deviate significantly from real-world scenarios. This disparity often leads existing methods to struggle with unfamiliar categories and severe incompleteness in real-world situations. In this paper, we propose PrototypeCompletion, a novel prototype-based approach for point cloud completion. It begins by generating rough prototypes and subsequently augments them with additional geometric detail for the final prediction. With just a few hundred partial-complete point cloud pairs, our approach effectively handles point clouds from diverse real-world scenarios, including indoor ScanNet and outdoor KITTI. Additionally, we propose a new metric and test benchmark based on ScanNet200 and KITTI to evaluate model performance in real-world scenarios and to promote future research. Experimental results demonstrate that our method outperforms state-of-the-art methods on the existing PCN benchmark and excels in various real-world situations with different object categories and sensors. The code will be made available.
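
To make the coarse-to-fine idea in the abstract concrete, below is a minimal, hypothetical sketch of a "prototype then refine" completion pipeline: a partial cloud is encoded, a rough prototype point set is decoded, and per-point offsets upsample it into a dense prediction. All module names, layer sizes, and the offset-based refinement are illustrative assumptions for this sketch, not the authors' actual ProtoComp architecture.

    # Minimal sketch (assumptions noted above), not the authors' implementation.
    import torch
    import torch.nn as nn

    class PrototypeCompleter(nn.Module):
        def __init__(self, feat_dim=256, num_proto_points=128, up_factor=8):
            super().__init__()
            # Encode the partial input into a global feature
            # (assumption: a PointNet-style shared MLP + max pool).
            self.encoder = nn.Sequential(
                nn.Linear(3, 128), nn.ReLU(),
                nn.Linear(128, feat_dim),
            )
            # Stage 1: decode a rough prototype (coarse point set).
            self.proto_decoder = nn.Linear(feat_dim, num_proto_points * 3)
            # Stage 2: predict local offsets that add geometric detail,
            # upsampling each prototype point by `up_factor`.
            self.refiner = nn.Sequential(
                nn.Linear(feat_dim + 3, 128), nn.ReLU(),
                nn.Linear(128, up_factor * 3),
            )
            self.num_proto_points = num_proto_points
            self.up_factor = up_factor

        def forward(self, partial):                     # partial: (B, N, 3)
            feat = self.encoder(partial).max(dim=1).values          # (B, F)
            proto = self.proto_decoder(feat).view(-1, self.num_proto_points, 3)
            # Concatenate the global feature with each prototype point
            # and predict offsets around it.
            feat_exp = feat.unsqueeze(1).expand(-1, self.num_proto_points, -1)
            offsets = self.refiner(torch.cat([feat_exp, proto], dim=-1))
            offsets = offsets.view(-1, self.num_proto_points, self.up_factor, 3)
            dense = proto.unsqueeze(2) + offsets                    # (B, P, U, 3)
            return proto, dense.reshape(dense.shape[0], -1, 3)

    if __name__ == "__main__":
        partial = torch.rand(2, 2048, 3)                # two partial clouds
        proto, completed = PrototypeCompleter()(partial)
        print(proto.shape, completed.shape)             # (2, 128, 3) (2, 1024, 3)

The two-stage split mirrors the abstract's description: the prototype fixes overall shape first, and the refinement stage only has to model local geometry, which is what allows adaptation from a few hundred training pairs.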
