

Poster

Edge-Guided Fusion and Motion Augmentation for Event-Image Stereo

Fengan Zhao · Qianang Zhou · Junlin Xiong

Strong Double Blind: This paper was not made available on public preprint services during the review process.
Thu 3 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Traditional frame-based cameras have achieved impressive performance in stereo matching, yet challenges remain due to sensor constraints such as low dynamic range and motion blur. In contrast, event cameras capture per-pixel intensity changes asynchronously with high temporal resolution, making them less prone to motion blur and giving them a high dynamic range. However, the event stream provides less spatial information than intensity images. Although existing state-of-the-art event-based stereo methods fuse features from both modalities, they still struggle to effectively capture and represent edge details in the scene. In this paper, we propose a novel edge-guided event-image stereo network that exploits explicit edge cues to supplement edge information during disparity estimation. First, we introduce an edge-guided event-image feature fusion approach that injects edge information into the fused features. Second, we incorporate edge cues into the disparity update process through an edge-guided motion augmentation module, further reinforcing edge information during disparity estimation. Finally, we demonstrate the superiority of our method in stereo matching through experiments on a real-world dataset with joint image and event data.
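The abstract names two architectural components: an edge-guided event-image feature fusion step and an edge-guided motion augmentation module inside the iterative disparity update. The paper's implementation is not shown on this page; the PyTorch sketch below is only a hedged illustration of how such edge gating and edge-conditioned updates might look. All class names, channel widths, and the gating/update designs are assumptions for illustration, not the authors' code.

```python
# Hedged sketch (not the authors' code): illustrative PyTorch modules for
# (1) edge-gated fusion of image/event features and (2) an edge-conditioned
# residual disparity update. Channel sizes and designs are assumptions.
import torch
import torch.nn as nn


class EdgeGuidedFusion(nn.Module):
    """Fuse image and event features, re-weighted by a 1-channel edge map."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Project concatenated image/event features back to `channels`.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        # Per-pixel gate derived from the edge cue.
        self.edge_gate = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, img_feat, evt_feat, edge_map):
        fused = self.fuse(torch.cat([img_feat, evt_feat], dim=1))
        gate = self.edge_gate(edge_map)
        return fused * (1.0 + gate)  # residual re-weighting toward edge regions


class EdgeGuidedMotionAug(nn.Module):
    """Condition each iterative disparity update on the same edge cue."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.edge_enc = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.update = nn.Sequential(
            nn.Conv2d(2 * channels + 1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),  # residual disparity
        )

    def forward(self, hidden, disp, edge_map):
        edge_feat = self.edge_enc(edge_map)
        delta = self.update(torch.cat([hidden, edge_feat, disp], dim=1))
        return disp + delta


if __name__ == "__main__":
    # Dummy tensors: batch 2, 64-channel feature maps at 120x160 resolution.
    img = torch.randn(2, 64, 120, 160)
    evt = torch.randn(2, 64, 120, 160)
    edge = torch.rand(2, 1, 120, 160)  # e.g. a Sobel/Canny edge map in [0, 1]
    disp = torch.zeros(2, 1, 120, 160)

    fused = EdgeGuidedFusion(64)(img, evt, edge)
    disp = EdgeGuidedMotionAug(64)(fused, disp, edge)
    print(fused.shape, disp.shape)  # (2, 64, 120, 160) and (2, 1, 120, 160)
```

In this sketch the gate emphasizes fused features near edges, while the update module conditions each residual disparity step on the same edge cue, mirroring the two roles the abstract describes for the fusion and motion augmentation components.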
