

Poster

LEROjD: Lidar Extended Radar-Only Object Detection

Patrick Palmer · Martin Krüger · Stefan Schütte · Richard Altendorfer · Ganesh Adam · Torsten Bertram

Strong Double Blind: This paper was not made available on public preprint services during the review process.
Thu 3 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Accurate 3D object detection is vital for automated driving perception. While lidar sensors are well suited for this task, they are expensive and have limitations in adverse weather conditions. 3+1D imaging radar sensors offer a cost-effective, robust alternative but face challenges due to their low resolution and high measurement noise. Existing 3+1D imaging radar datasets include both radar and lidar data, enabling cross-modal model improvements. Although lidar must not be used during inference, it can aid the training of a radar-only object detector. We explore two strategies to transfer knowledge from the lidar domain to radar-only object detectors: 1. multi-stage training with sequential lidar point cloud thin-out, and 2. cross-modal knowledge distillation. In the multi-stage process, three thin-out methods are examined. Our results show significant performance gains of up to 4.2 percentage points in mean Average Precision with multi-stage training and up to 3.9 percentage points with knowledge distillation by initializing the student with the teacher's weights. The main benefit of these approaches is their applicability to other 3D object detection networks without altering their architecture, as we demonstrate on two different object detectors.
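The multi-stage training idea can be illustrated with a small sketch. The abstract mentions three thin-out methods without naming them here, so the snippet below uses plain random subsampling as one illustrative example; the `thin_out` function, the `keep_ratio` schedule, and the dummy point cloud are all assumptions for demonstration, not the paper's actual implementation.

```python
import numpy as np


def thin_out(points: np.ndarray, keep_ratio: float,
             rng: np.random.Generator) -> np.ndarray:
    """Randomly subsample a point cloud, keeping a fraction of its points.

    A keep_ratio of 0 returns an empty cloud, mimicking the final
    radar-only training stage where no lidar points remain.
    """
    n_keep = int(len(points) * keep_ratio)
    if n_keep == 0:
        return points[:0]
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]


# Hypothetical multi-stage schedule: start with the full lidar cloud and
# progressively thin it out until only the radar input is left.
rng = np.random.default_rng(0)
lidar = rng.normal(size=(10000, 4))  # dummy x, y, z, intensity points
for stage, ratio in enumerate([1.0, 0.5, 0.1, 0.0], start=1):
    stage_cloud = thin_out(lidar, ratio, rng)
    print(f"stage {stage}: {len(stage_cloud)} lidar points")
```

In each stage, the detector would be trained on the (radar + thinned lidar) input before moving to the next, sparser stage, so that the network gradually adapts to the radar-only regime.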
