Poster

SLAck: Semantic, Location, and Appearance Aware Open-Vocabulary Tracking

Siyuan Li · Lei Ke · Yung-Hsu Yang · Luigi Piccinelli · Mattia Segu · Martin Danelljan · Luc Van Gool

#107
Strong Double Blind: this paper was not made available on public preprint services during the review process.
Wed 2 Oct 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Open-vocabulary Multiple Object Tracking (MOT) aims to generalize trackers to novel categories absent from the training set. Currently, the best-performing methods rely mainly on pure appearance matching. Because motion patterns are complex in large-vocabulary scenarios and the classification of novel objects is unstable, existing methods either ignore motion and semantic cues or apply them heuristically in the final matching steps. In this paper, we present SLAck, a unified framework that jointly considers location/motion, semantic, and appearance priors in the early steps of association and learns how to integrate all valuable information through a lightweight spatial and temporal object graph. Our method eliminates complex post-processing heuristics for fusing different cues and significantly boosts association performance for large-scale open-vocabulary tracking. Without bells and whistles, we significantly outperform previous state-of-the-art methods on novel-class tracking on the Open-vocabulary MOT and TAO TETA benchmarks. Our code and models will be released.
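The abstract describes two ideas: early fusion of location, semantic, and appearance cues into a single per-object representation, and message passing over spatial (within-frame) and temporal (cross-frame) object graphs before any matching decision is made. The sketch below illustrates one plausible reading of that design in PyTorch. It is not the authors' released implementation; every module name, feature dimension, and the use of multi-head attention as a stand-in for the object graph is an assumption made for illustration.

```python
import torch
import torch.nn as nn

class JointCueAssociator(nn.Module):
    """Illustrative sketch only (not the SLAck codebase): fuse appearance,
    semantic, and location cues per object, refine embeddings with
    attention as a stand-in for spatial/temporal object graphs, and
    produce cross-frame association scores."""

    def __init__(self, app_dim=256, sem_dim=512, d_model=128):
        super().__init__()
        # Early fusion: project each cue into a shared space and sum,
        # so no cue is deferred to a heuristic post-processing step.
        self.app_proj = nn.Linear(app_dim, d_model)  # appearance feature
        self.sem_proj = nn.Linear(sem_dim, d_model)  # semantic/class embedding
        self.loc_proj = nn.Linear(4, d_model)        # normalized box (cx, cy, w, h)
        self.spatial = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.temporal = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

    def embed(self, app, sem, box):
        x = self.app_proj(app) + self.sem_proj(sem) + self.loc_proj(box)
        # Spatial graph stand-in: objects in the same frame exchange messages.
        x = x + self.spatial(x, x, x)[0]
        return x

    def forward(self, prev, curr):
        # prev, curr: dicts of (1, N, *) tensors for one frame pair.
        p = self.embed(prev["app"], prev["sem"], prev["box"])
        c = self.embed(curr["app"], curr["sem"], curr["box"])
        # Temporal graph stand-in: current detections attend to prior tracks.
        c = c + self.temporal(c, p, p)[0]
        # Pairwise match logits; a matcher (e.g. Hungarian) would act on these.
        return torch.einsum("bnd,bmd->bnm", c, p)

# Toy usage: 5 tracks in the previous frame, 6 detections in the current one.
model = JointCueAssociator()
frame = lambda n: {"app": torch.randn(1, n, 256),
                   "sem": torch.randn(1, n, 512),
                   "box": torch.rand(1, n, 4)}
scores = model(frame(5), frame(6))
print(scores.shape)  # torch.Size([1, 6, 5])
```

Under this reading, the key design choice is that all three cues enter the embedding before matching, so the network learns how to weight them per object pair rather than relying on hand-tuned fusion rules at the final matching stage.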
