Poster
UMERegRobust – Universal Manifold Embedding Compatible Features for Robust Point Cloud Registration
Yuval Haitman · Amit Efraim · Joseph Francos
# 330
Point cloud registration is a critical component in many vision-based applications, such as perception for autonomous systems. The registration of point cloud observations of a rigid object, or scene, amounts to estimating the rigid transformation relating them. However, in practical scenarios, these observations are often characterized by partial overlap as a result of being acquired from different viewpoints, as well as by different sampling patterns. In this paper, we adopt the Universal Manifold Embedding (UME) framework for the estimation of rigid transformations and extend it so that it can accommodate scenarios involving partial overlap and differently sampled point clouds. UME is a methodology designed for mapping observations of the same object, related by rigid transformations, into a single low-dimensional linear subspace. This process yields a transformation-invariant representation of the observations, with its matrix form representation being covariant with the transformation. We extend the UME framework by introducing a UME-compatible feature extraction method augmented with a unique UME contrastive loss and a sampling equalizer. These components are integrated into a comprehensive and robust registration pipeline, named UMERegRobust. We propose the RotKITTI registration benchmark, specifically tailored to evaluate registration methods in scenarios involving large rotations. UMERegRobust achieves better-than-state-of-the-art performance on the KITTI benchmark, especially when the strict precision threshold of (1°, 10 cm) is considered (with an average gain of +9%), and notably outperforms SOTA methods on the RotKITTI benchmark (with a +45% gain over the most recent SOTA method).
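The covariance property the abstract relies on can be illustrated with a small numerical sketch. The Python snippet below is not the authors' pipeline; it assumes per-point attributes that are invariant to the rigid motion (the abstract's UME-compatible features play this role in practice), and the function names, Gaussian weight functions, and least-squares recovery step are illustrative choices. It shows how stacked first-order moments of two rigidly related clouds differ only by a matrix encoding the rotation and translation.

```python
import numpy as np

def ume_moment_matrix(points, attrs, weight_fns):
    """Stack first-order geometric moments of nonlinearly weighted attributes.

    points     : (N, 3) array of coordinates.
    attrs      : (N,) per-point attribute assumed invariant to the rigid motion.
    weight_fns : list of K nonlinear functions applied to the attributes.
    Returns a (K, 4) matrix whose k-th row is sum_i w_k(f_i) * [1, x_i, y_i, z_i].
    """
    homog = np.hstack([np.ones((points.shape[0], 1)), points])  # (N, 4)
    return np.stack([w(attrs) @ homog for w in weight_fns])     # (K, 4)

# Illustrative weight functions: soft Gaussian bins over the attribute values.
weight_fns = [lambda f, c=c: np.exp(-((f - c) / 0.2) ** 2)
              for c in (0.1, 0.3, 0.5, 0.7, 0.9)]

rng = np.random.default_rng(0)
src = rng.normal(size=(500, 3))
attrs = rng.uniform(size=500)

# Ground-truth rigid motion: x' = R x + t.
theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, -2.0, 0.5])
tgt = src @ R.T + t

T_src = ume_moment_matrix(src, attrs, weight_fns)
T_tgt = ume_moment_matrix(tgt, attrs, weight_fns)

# Covariance: since [1, x'^T] = [1, x^T] @ [[1, t^T], [0, R^T]], the moment
# matrices satisfy T_tgt = T_src @ M, so the motion falls out of a linear solve.
M, *_ = np.linalg.lstsq(T_src, T_tgt, rcond=None)
R_est, t_est = M[1:, 1:].T, M[0, 1:]
print(np.allclose(R_est, R, atol=1e-6), np.allclose(t_est, t, atol=1e-6))
```

Because M acts on the right, each column of T_tgt is a linear combination of the columns of T_src, so the column space of the moment matrix is unchanged by the motion; this is the transformation-invariant linear subspace, with the matrix itself transforming covariantly, as described in the abstract.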