

Poster

Eliminating Feature Ambiguity for Few-Shot Segmentation

Qianxiong Xu · Guosheng Lin · Chen Change Loy · Cheng Long · Ziyue Li · Rui Zhao

Strong Double Blind: this paper was not made available on public preprint services during the review process.
Fri 4 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Recent advancements in few-shot segmentation (FSS) have exploited pixel-by-pixel matching between query and support features, typically based on cross attention, which selectively activates query foreground (FG) features that correspond to same-class support FG features. However, due to the large receptive fields in deep layers of the backbone, the extracted query and support FG features are inevitably mingled with background (BG) features, impeding the FG-FG matching in cross attention. Hence, the query FG features are fused with fewer support FG features, i.e., the support information is not well utilized. This paper presents a novel plug-in termed ambiguity elimination network (AENet), which can be plugged into any existing cross attention-based FSS method. The main idea is to mine discriminative query FG regions to rectify the ambiguous FG features, increasing the proportion of FG information so as to suppress the negative impact of the doped BG features. In this way, the FG-FG matching is naturally enhanced. We plug AENet into two baselines, CyCTR and SCCAN, for evaluation, and their scores are improved by large margins, e.g., the 1-shot performance of SCCAN is improved by more than 3.0% on both PASCAL-5i and COCO-20i. The source code will be released upon paper acceptance.

Keywords: Discriminative prior mask · Discriminative query regions · Feature refinement
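To make the pixel-by-pixel matching concrete, below is a minimal sketch (not the authors' code) of the kind of cross attention the abstract refers to: query pixels attend to mask-selected support foreground pixels and aggregate their features. All tensor shapes, names, and the residual fusion are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def cross_attention_fg_matching(query_feat, support_feat, support_mask):
    """Illustrative single-head cross attention for query-support FG matching.

    query_feat:   (B, C, Hq, Wq) query features from the backbone
    support_feat: (B, C, Hs, Ws) support features from the backbone
    support_mask: (B, 1, Hs, Ws) binary FG mask of the support image
    """
    B, C, Hq, Wq = query_feat.shape

    q = query_feat.flatten(2).transpose(1, 2)      # (B, Nq, C)
    k = support_feat.flatten(2).transpose(1, 2)    # (B, Ns, C)
    v = k                                          # values reuse the support features
    m = support_mask.flatten(2)                    # (B, 1, Ns)

    # Pixel-to-pixel similarity between every query and support location
    attn = torch.bmm(q, k.transpose(1, 2)) / (C ** 0.5)   # (B, Nq, Ns)
    # Mask out support BG so query pixels aggregate only support FG features
    attn = attn.masked_fill(m < 0.5, float('-inf'))
    attn = F.softmax(attn, dim=-1)
    attn = torch.nan_to_num(attn)                  # rows with no FG pixels become zeros

    fused = torch.bmm(attn, v)                     # (B, Nq, C)
    fused = fused.transpose(1, 2).reshape(B, C, Hq, Wq)
    return query_feat + fused                      # residual fusion with the query
```

Per the abstract, the issue AENet targets is that both q and k above already carry doped BG information, which weakens the FG-FG scores in attn; AENet rectifies the ambiguous FG features (raising their FG proportion) before this matching step rather than changing the attention itself.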
