

Poster

On Calibration of Object Detectors: Pitfalls, Evaluation and Baselines

Selim Kuzucu · Kemal Oksuz · Jonathan Sadeghi · Puneet Dokania

Tue 1 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Building calibrated object detectors is a crucial challenge for their reliable use in safety-critical applications. Recent approaches to this challenge involve (1) designing new loss functions to obtain calibrated detectors by training them from scratch, and (2) post-hoc Temperature Scaling (TS), which learns to scale the likelihoods of a trained detector so that it outputs calibrated predictions. These approaches are then evaluated using a combination of Detection Expected Calibration Error (D-ECE) and Average Precision. In this work, through extensive analysis, we show that these recent evaluation frameworks and metrics, as well as the use of TS, have significant drawbacks that lead to incorrect conclusions. As a remedy, we propose a principled evaluation framework to jointly measure the calibration and accuracy of object detectors. We also tailor efficient and easy-to-use post-hoc calibration approaches, Platt Scaling and Isotonic Regression, specifically to object detection. Contrary to the common notion, our experiments show that, once designed and evaluated properly, post-hoc calibrators, which are extremely cheap to build and use, are considerably more powerful and effective than recent train-time calibration methods. To illustrate, D-DETR with our post-hoc Isotonic Regression calibrator outperforms the state-of-the-art Cal-DETR by more than 7 D-ECE points on the COCO dataset. We also provide improved versions of Localization-aware ECE and show the efficacy of our method on these metrics as well. Code will be made public.
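For intuition, below is a minimal, hypothetical sketch of what confidence-only post-hoc Isotonic Regression calibration and a D-ECE-style metric look like; it is not the authors' released code. It assumes a held-out validation set where each detection carries a confidence score and a binary label (1 if it matches a ground-truth box at some IoU threshold), and the function names `fit_isotonic_calibrator` and `d_ece` are illustrative. Note that the full D-ECE also bins over additional box features (e.g., location and size), whereas this sketch bins over confidence alone.

```python
# Sketch: confidence-only post-hoc calibration of detection scores with
# isotonic regression, plus a simple confidence-only D-ECE-style metric.
import numpy as np
from sklearn.isotonic import IsotonicRegression


def fit_isotonic_calibrator(val_scores, val_is_tp):
    """Fit a monotone map from raw confidence to calibrated confidence
    on held-out validation detections (val_is_tp: 1 if true positive)."""
    calibrator = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
    calibrator.fit(val_scores, val_is_tp)
    return calibrator


def d_ece(scores, is_tp, n_bins=10):
    """Confidence-only D-ECE sketch: bin detections by confidence and
    average the gap between mean confidence and precision per bin,
    weighted by the fraction of detections in each bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(scores)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (scores > lo) & (scores <= hi)
        if mask.any():
            ece += mask.sum() / n * abs(scores[mask].mean() - is_tp[mask].mean())
    return ece


# Usage on toy, deliberately miscalibrated data: fit on validation
# detections, then compare the metric before and after calibration.
rng = np.random.default_rng(0)
val_scores = rng.uniform(size=2000)
val_is_tp = (rng.uniform(size=2000) < val_scores ** 2).astype(float)
cal = fit_isotonic_calibrator(val_scores, val_is_tp)
print("D-ECE before:", d_ece(val_scores, val_is_tp))
print("D-ECE after: ", d_ece(cal.predict(val_scores), val_is_tp))
```

Isotonic regression is attractive here because it only assumes the calibration map is monotone, so it preserves the ranking of detections (and hence Average Precision computed from score order) while reshaping the scores themselves.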
