

Poster

Learning Where to Look: Self-supervised Viewpoint Selection for Active Localization using Geometrical Information

Luca Di Giammarino · Boyang Sun · Giorgio Grisetti · Marc Pollefeys · Hermann Blum · Daniel Barath

Strong Double Blind: this paper was not made available on public preprint services during the review process.
Fri 4 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Accurate localization in diverse environments is a fundamental challenge in computer vision and robotics. The task involves determining the precise position and orientation of a sensor, typically a camera, within a given space. Traditional localization methods often rely on passive sensing, which may struggle in scenarios with limited features or dynamic environments. In response, this paper explores the domain of active localization, emphasizing the importance of viewpoint selection to enhance localization accuracy. Our contributions include a data-driven approach with a simple architecture designed for real-time operation, a self-supervised method for generating training data, and the capability to consistently integrate our map into a planning framework tailored for real-world robotics applications. Our results demonstrate that our method outperforms existing approaches targeting similar problems, generalizing across both synthetic and real data. We also release an open-source implementation to benefit the community.
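To make the idea of data-driven viewpoint selection concrete, below is a minimal, hypothetical sketch (PyTorch) of how a lightweight viewpoint-scoring network could be queried at decision time to pick the next view for localization. The class name `ViewpointScorer`, the layer sizes, the input format, and the helper `select_best_viewpoint` are illustrative assumptions, not the architecture or API released by the authors.

```python
# Hypothetical sketch only: names, network shape, and inputs are assumptions
# for illustration, not the authors' released implementation.
import torch
import torch.nn as nn


class ViewpointScorer(nn.Module):
    """Tiny CNN mapping an observation for a candidate viewpoint
    (e.g., a rendered or captured image patch) to a scalar score
    meant to reflect how well that view would localize."""

    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1)).squeeze(-1)


def select_best_viewpoint(scorer: nn.Module, candidates: torch.Tensor) -> int:
    """Return the index of the candidate view with the highest predicted score.

    candidates: (N, C, H, W) batch of images, one per candidate viewpoint.
    """
    scorer.eval()
    with torch.no_grad():
        scores = scorer(candidates)  # (N,) predicted localization utility
    return int(torch.argmax(scores).item())


if __name__ == "__main__":
    # Usage: score 8 hypothetical 128x128 candidate views and pick one.
    scorer = ViewpointScorer()
    views = torch.rand(8, 3, 128, 128)
    print("next-best view index:", select_best_viewpoint(scorer, views))
```

In an active-localization loop, such a scorer would typically be evaluated over candidate viewpoints proposed by a planner, with the chosen view fed back into the localization and planning pipeline; the details here are only a sketch of that pattern.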
