

Poster

General Geometry-aware Weakly Supervised 3D Object Detection

Guowen Zhang · Junsong Fan · Liyi Chen · Zhaoxiang Zhang · Zhen Lei · Yabin Zhang

# 108
Strong Double Blind: this paper was not made available on public preprint services during the review process.
Wed 2 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

3D object detection is an indispensable component of scene understanding. However, annotating large-scale 3D datasets requires significant human effort. To reduce this cost, many methods adopt weakly supervised 3D object detection, estimating 3D boxes from 2D boxes together with scene- or class-specific priors. These approaches generally depend on sophisticated manual priors, which results in poor transferability to novel categories and scenes. To address this, we propose a general approach that transfers easily to new scenes and/or classes: a unified framework for learning 3D object detectors from weak 2D boxes obtained from the associated RGB images. To solve the ill-posed problem of estimating 3D boxes from 2D boxes, we introduce three general components: a prior injection module, a 2D space projection constraint, and a 3D space geometry constraint. We minimize the discrepancy between the boundaries of projected 3D boxes and their corresponding 2D boxes on the image plane. In addition, we incorporate a semantic ratio loss and a Point-to-Box alignment loss to refine the pose of the estimated 3D boxes. Experiments on the KITTI and SUN-RGBD datasets demonstrate that the designed losses yield surprisingly high-quality 3D bounding boxes from 2D annotations alone. Code will be released.
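To make the 2D space projection constraint concrete, below is a minimal PyTorch sketch of one plausible form of it: project the eight corners of a candidate 3D box into the image with the camera intrinsics, take the tight axis-aligned bound of the projected corners, and penalize its L1 discrepancy from the annotated 2D box. This is not the authors' released code; the box parameterization, KITTI-style camera frame (x right, y down, z forward, heading about y), and the plain L1 boundary loss are assumptions for illustration.

```python
import torch

def box3d_corners(center, dims, ry):
    """8 corners of a 3D box in camera coordinates (x right, y down, z forward).
    center: (3,) box center, dims: (l, h, w), ry: heading angle about the y axis.
    Axis conventions follow KITTI-style camera frames (an assumption here)."""
    l, h, w = dims
    x = torch.tensor([ 1.,  1., -1., -1.,  1.,  1., -1., -1.]) * (l / 2)
    y = torch.tensor([ 1., -1.,  1., -1.,  1., -1.,  1., -1.]) * (h / 2)
    z = torch.tensor([ 1.,  1.,  1.,  1., -1., -1., -1., -1.]) * (w / 2)
    corners = torch.stack([x, y, z], dim=1)                 # (8, 3) in the box frame
    c, s = torch.cos(ry), torch.sin(ry)
    zero, one = torch.zeros_like(c), torch.ones_like(c)
    R = torch.stack([torch.stack([c,   zero, s]),
                     torch.stack([zero, one,  zero]),
                     torch.stack([-s,  zero, c])])          # rotation about the y axis
    return corners @ R.T + center                            # (8, 3) in the camera frame

def projection_boundary_loss(center, dims, ry, K, box2d):
    """L1 gap between the tight image-plane bound of the projected 3D box
    and the annotated 2D box given as (x1, y1, x2, y2)."""
    corners = box3d_corners(center, dims, ry)
    uvw = corners @ K.T                                      # apply 3x3 intrinsics K
    uv = uvw[:, :2] / uvw[:, 2:].clamp(min=1e-6)             # perspective divide
    proj = torch.cat([uv.min(dim=0).values, uv.max(dim=0).values])
    return torch.abs(proj - box2d).mean()

# Toy usage: gradients of the loss flow back to the 3D box parameters,
# so the box pose can be refined under the 2D supervision alone.
center = torch.tensor([1.0, 1.5, 15.0], requires_grad=True)
dims   = torch.tensor([3.9, 1.5, 1.6])          # e.g. a car-sized prior (l, h, w)
ry     = torch.tensor(0.1, requires_grad=True)
K      = torch.tensor([[721.5, 0., 609.6], [0., 721.5, 172.9], [0., 0., 1.]])
box2d  = torch.tensor([500., 160., 620., 230.])
projection_boundary_loss(center, dims, ry, K, box2d).backward()
```

In this reading, the projection constraint supplies the image-plane supervision, while the paper's 3D space geometry constraint (semantic ratio and Point-to-Box alignment losses) would further refine the pose using the point cloud; those terms are not sketched here because their exact form is not given in the abstract.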
