

Poster

U-COPE: Taking a Further Step to Universal 9D Category-level Object Pose Estimation

Li Zhang · Weiqing Meng · Yan Zhong · Bin Kong · Mingliang Xu · Jianming Du · Xue Wang · Rujing Wang · Liu Liu

#177
Strong Double Blind: This paper was not made available on public preprint services during the review process.
Tue 1 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Rigid and articulated objects are common in our daily lives. Pose estimation for both types of objects has been extensively studied within their respective domains. However, a universal framework capable of estimating the pose of both rigid and articulated objects has yet to be reported. In this paper, we introduce a Universal 9D Category-level Object Pose Estimation (U-COPE) framework designed to address this gap. Our approach offers a novel perspective on rigid and articulated objects, redefining their pose estimation problems to unify them into a common task. Taking either 3D point clouds or RGB-D images as input, we extract Point Pair Features (PPF) independently from each object part for end-to-end learning. Moreover, instead of the direct prediction used in prior art, we employ a universal voting strategy to derive the parameters crucial for object pose estimation. Our network is trained end-to-end to optimize three key objectives: joint information, part segmentation, and 9D pose estimation via parameter voting. Extensive experiments validate the robustness of our method in estimating poses for both rigid and articulated objects and demonstrate its generalization to unseen object instances. Notably, our approach achieves state-of-the-art performance on both synthetic and real-world datasets. Our code will be made publicly available.
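
For readers unfamiliar with the descriptor the abstract mentions, the sketch below illustrates the classic 4D Point Pair Feature (distance between two points plus three angles involving their normals), computed over randomly sampled point pairs of a single part's point cloud. This is a minimal illustrative example, not the authors' implementation: the sampling strategy, function names, and pair count are assumptions made here for clarity.

```python
# Minimal sketch of the classic 4D Point Pair Feature (PPF) descriptor,
# computed per object part as a set of randomly sampled point pairs.
# Illustrative only; not the U-COPE authors' code.
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """4D PPF: (|d|, angle(n1, d), angle(n2, d), angle(n1, n2)) with d = p2 - p1."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    if dist < 1e-9:
        return np.zeros(4)
    d_hat = d / dist
    def angle(a, b):
        return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    return np.array([dist, angle(n1, d_hat), angle(n2, d_hat), angle(n1, n2)])

def sample_ppfs(points, normals, num_pairs=1024, rng=None):
    """Stack PPFs for randomly sampled point pairs from one part's point cloud."""
    rng = rng or np.random.default_rng(0)
    idx = rng.integers(0, len(points), size=(num_pairs, 2))
    feats = [point_pair_feature(points[i], normals[i], points[j], normals[j])
             for i, j in idx]
    return np.stack(feats)

# Toy usage: 500 random points with unit normals standing in for one object part.
pts = np.random.rand(500, 3)
nrm = np.random.randn(500, 3)
nrm /= np.linalg.norm(nrm, axis=1, keepdims=True)
ppf = sample_ppfs(pts, nrm)  # shape (1024, 4)
```

Because each feature depends only on relative distances and angles, the descriptor is invariant to rigid transformations of the part, which is what makes per-part PPFs a natural input for a voting-based pose estimator.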
