Structural re-parameterization (SRP) is a family of techniques that boosts neural networks without introducing any computational cost at the inference stage. Existing SRP methods successfully cover many architectural components, such as normalization and convolution layers. However, the widely used but computationally expensive attention modules cannot be directly handled by SRP, because they act multiplicatively and their outputs are input-dependent at inference time. In this paper, we statistically discover a counter-intuitive phenomenon, the \textit{Stripe Observation}, across various settings: channel attention values consistently approach some constant vectors during training. This observation inspires us to propose a novel attention-alike structural re-parameterization, called ASR, that achieves SRP for a given network while retaining the effectiveness of the attention mechanism. Extensive experiments on several standard benchmarks demonstrate that ASR generally improves performance across various scenarios without any elaborate model crafting. We also provide experimental and theoretical evidence for how the proposed ASR enhances model performance.
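To make the core idea concrete, the following is a minimal sketch (not the authors' released implementation) of why a channel attention vector that collapses to a constant can be re-parameterized away at inference time: the multiplication $a \odot \mathrm{conv}(x)$ can be folded into the convolution's own weights and bias. The function name and setup below are illustrative assumptions.

\begin{verbatim}
# Minimal PyTorch sketch: fold a constant channel attention vector `a`
# into a preceding convolution so that inference incurs no extra cost.
import torch
import torch.nn as nn

def fold_constant_attention(conv: nn.Conv2d, a: torch.Tensor) -> nn.Conv2d:
    """Return a conv whose output equals a.view(1, -1, 1, 1) * conv(x)."""
    folded = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       dilation=conv.dilation, groups=conv.groups,
                       bias=conv.bias is not None)
    with torch.no_grad():
        # a * (W x + b) = (a W) x + a b: scale each output channel's filter and bias
        folded.weight.copy_(conv.weight * a.view(-1, 1, 1, 1))
        if conv.bias is not None:
            folded.bias.copy_(conv.bias * a)
    return folded

# Equivalence check between multiplicative attention and the folded conv
conv = nn.Conv2d(8, 16, 3, padding=1)
a = torch.rand(16)                       # constant channel attention vector
x = torch.randn(2, 8, 32, 32)
y_attn = a.view(1, -1, 1, 1) * conv(x)   # attention applied multiplicatively
y_fold = fold_constant_attention(conv, a)(x)
assert torch.allclose(y_attn, y_fold, atol=1e-5)
\end{verbatim}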