

Poster

Enhanced Sparsification via Stimulative Training

Shengji Tang · Weihao Lin · Hancheng Ye · Peng Ye · Chong Yu · Baopu Li · Tao Chen

Wed 2 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Sparsification-based pruning has been an important category of model compression. Existing methods commonly add sparsity-inducing penalty terms to suppress the importance of to-be-dropped weights, a scheme regarded as the suppressed sparsification paradigm. However, this paradigm deactivates the to-be-dropped part of the network before pruning, damaging its capacity and thereby degrading performance. To alleviate this issue, we first study and reveal the relative sparsity effect in emerging stimulative training, and then propose an enhanced sparsification paradigm framework named STP for structured pruning, which maintains the magnitude of dropped weights and enhances the expressivity of kept weights via self-distillation. In addition, to obtain a relatively optimal architecture for the final network, we propose a multi-dimension architecture space and a knowledge-distillation-guided exploration strategy. To reduce the large capacity gap in distillation, we propose a subnet mutating expansion technique. Extensive experiments on various benchmarks demonstrate the effectiveness of STP. Specifically, without pre-training or fine-tuning, our method consistently achieves superior performance at different budgets, especially under extremely aggressive pruning, e.g., retaining 95.11% of Top-1 accuracy (72.43% vs. 76.15%) while reducing FLOPs by 85% for ResNet-50 on ImageNet. Code will be released.
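To make the contrast between the two paradigms concrete, below is a minimal PyTorch sketch (not the authors' released code; the toy network, layer names, and hyperparameters such as `lam`, `keep_ratio`, and `temp` are illustrative assumptions). The first loss shows the conventional suppressed paradigm, where an L1 penalty on BatchNorm scaling factors pushes to-be-dropped channels toward zero; the second loosely mirrors the enhanced, stimulative-training-style idea, where no penalty is applied and a randomly sampled subnet is instead distilled from the full network's own output.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy network; sizes are illustrative only.
class ToyNet(nn.Module):
    def __init__(self, width=64, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(3, width, 3, padding=1)
        self.bn = nn.BatchNorm2d(width)
        self.head = nn.Linear(width, num_classes)

    def forward(self, x, channel_mask=None):
        x = F.relu(self.bn(self.conv(x)))
        if channel_mask is not None:
            # Zero out channels outside the sampled subnet.
            x = x * channel_mask.view(1, -1, 1, 1)
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return self.head(x)

def suppressed_sparsification_loss(model, logits, targets, lam=1e-4):
    """Suppressed paradigm: an L1 penalty on BN scaling factors drives
    to-be-dropped channels toward zero before pruning."""
    ce = F.cross_entropy(logits, targets)
    penalty = sum(m.weight.abs().sum()
                  for m in model.modules() if isinstance(m, nn.BatchNorm2d))
    return ce + lam * penalty

def enhanced_paradigm_loss(model, x, targets, keep_ratio=0.5, temp=4.0):
    """Rough sketch of the enhanced paradigm: no sparsity penalty.
    A random subnet is sampled each step and distilled from the full
    network (self-distillation), so kept weights gain expressivity
    while dropped weights keep their magnitude."""
    full_logits = model(x)                    # full network acts as teacher
    width = model.bn.num_features
    mask = torch.zeros(width, device=x.device)
    mask[torch.randperm(width)[:int(width * keep_ratio)]] = 1.0
    sub_logits = model(x, channel_mask=mask)  # sampled subnet as student
    ce = F.cross_entropy(full_logits, targets)
    kd = F.kl_div(F.log_softmax(sub_logits / temp, dim=1),
                  F.softmax(full_logits.detach() / temp, dim=1),
                  reduction="batchmean") * temp * temp
    return ce + kd
```

The sketch only illustrates the difference in training signals; the paper's multi-dimension architecture space, knowledge-distillation-guided exploration, and subnet mutating expansion are not represented here.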
