

Poster

SAMFusion: Sensor-Adaptive Multimodal Fusion for 3D Object Detection in Adverse Weather

Edoardo Palladin · Roland Dietze · Praveen Narayanan · Mario Bijelic · Felix Heide

Strong Double Blind: This paper was not made available on public preprint services during the review process.
Thu 3 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Multimodal sensor fusion is an essential capability for autonomous robots, enabling object detection and decision-making in the presence of failing or uncertain inputs. While recent fusion methods perform well in normal environmental conditions, they fail in adverse weather, e.g., heavy fog, snow, or obstructions due to soiling. To address these challenges, we introduce a novel multi-sensor fusion approach tailored to adverse weather conditions. In addition to the RGB and LiDAR sensors employed in recent autonomous driving literature, our sensor fusion stack is capable of learning from NIR gated camera and radar modalities to tackle low-light and adverse weather conditions. We propose to fuse multimodal sensor data through attentive, depth-based blending schemes, with learned refinement in the Bird's Eye View (BEV) domain to combine image and range features. Our detections are predicted by a transformer decoder that weights modalities based on distance and visibility. We validate that our method improves the reliability of multimodal sensor fusion in autonomous vehicles under challenging weather conditions, bridging the gap between ideal conditions and real-world edge cases and improving average precision by 17.6 AP points over the second-best method for the pedestrian class in long-range dense fog conditions.
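To make the attentive BEV fusion idea in the abstract concrete, below is a minimal PyTorch sketch of per-cell cross-attention over modality features in the BEV domain. It is an illustration under stated assumptions, not the authors' implementation: the class name AttentiveBEVFusion, the feature dimension, the modality dictionary, and the use of a mean-pooled fusion query are all hypothetical choices, and the actual SAMFusion blending additionally conditions on depth, distance, and visibility.

```python
import torch
import torch.nn as nn
from typing import Dict


class AttentiveBEVFusion(nn.Module):
    """Toy cross-attention fusion of per-modality BEV feature maps.

    Each BEV cell forms a query that attends over the feature vectors
    contributed by the available modalities (e.g., camera, LiDAR, gated NIR,
    radar), so unreliable modalities can be down-weighted per location.
    """

    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.query_proj = nn.Linear(dim, dim)

    def forward(self, bev_feats: Dict[str, torch.Tensor]) -> torch.Tensor:
        # bev_feats: modality name -> (B, C, H, W) BEV feature map
        feats = torch.stack(list(bev_feats.values()), dim=1)  # (B, M, C, H, W)
        B, M, C, H, W = feats.shape
        # One query per BEV cell, M modality tokens as keys/values per cell.
        tokens = feats.permute(0, 3, 4, 1, 2).reshape(B * H * W, M, C)
        # Assumption: the mean over modalities serves as the fusion query.
        query = self.query_proj(tokens.mean(dim=1, keepdim=True))  # (B*H*W, 1, C)
        fused, _ = self.attn(query, tokens, tokens)                # (B*H*W, 1, C)
        return fused.reshape(B, H, W, C).permute(0, 3, 1, 2)       # (B, C, H, W)


if __name__ == "__main__":
    fusion = AttentiveBEVFusion(dim=64)
    dummy = {m: torch.randn(2, 64, 32, 32) for m in ["camera", "lidar", "gated", "radar"]}
    print(fusion(dummy).shape)  # torch.Size([2, 64, 32, 32])
```

The per-cell attention weights act as a learned, location-dependent gating over modalities, which is the mechanism that lets a fusion stack fall back on radar or gated imaging when LiDAR and RGB degrade in fog or snow.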
