

Poster

Context-Guided Spatial Feature Reconstruction for Efficient Semantic Segmentation

Zhenliang Ni · Xinghao Chen · Yingjie Zhai · Yehui Tang · Yunhe Wang

# 136
Thu 3 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Semantic segmentation is an important task for many applications, but achieving strong performance under limited computational budgets remains challenging. In this paper, we present CGRSeg, an efficient yet competitive segmentation framework based on context-guided spatial feature reconstruction. At its core, a Rectangular Self-Calibration Module is carefully designed for spatial feature reconstruction and pyramid context extraction: it captures global context along both the horizontal and vertical directions and combines them into an axial global context that explicitly models rectangular key areas. A shape self-calibration function is further designed to pull these key areas closer to the foreground objects. In addition, a lightweight Dynamic Prototype Guided head is proposed to improve the classification of foreground objects via explicit class embeddings. CGRSeg is extensively evaluated on the ADE20K, COCO-Stuff, and Pascal Context benchmarks and achieves state-of-the-art segmentation performance. Specifically, it reaches 43.6% mIoU on ADE20K with only 4.0 GFLOPs, outperforming SeaFormer and SegNeXt by 0.9% and 2.5% mIoU, respectively, while using about 38.0% fewer GFLOPs.
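The axial global context described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual module: we pool a feature map along each spatial axis and broadcast-add the two pooled vectors, so every position receives the combined horizontal and vertical global context, which peaks on the rectangular region through a strong activation.

```python
import numpy as np

def axial_context(x):
    """Hypothetical sketch of axial global context (assumption, not CGRSeg's code):
    pool along each axis, then broadcast-add so each position sees the
    horizontal + vertical context, highlighting a rectangular key area."""
    h_ctx = x.mean(axis=1, keepdims=True)  # (H, 1): global context of each row
    v_ctx = x.mean(axis=0, keepdims=True)  # (1, W): global context of each column
    return h_ctx + v_ctx                   # (H, W): rectangular response map

x = np.zeros((4, 6))
x[1, 2] = 6.0                 # a single strong "foreground" activation
ctx = axial_context(x)
# the row and column passing through (1, 2) receive elevated context,
# and their intersection (1, 2) receives the largest response
```

In the real module this axial context would guide feature reconstruction; here it only demonstrates how row and column pooling jointly single out a rectangular area.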
