

Poster

Pseudo-Labelling Should Be Aware of Disguising Channel Activations

Changrui Chen · Kurt Debattista · Jungong Han

Poster #117
Strong Double Blind review: this paper was not made available on public preprint services during the review process.
Thu 3 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

The pseudo-labelling algorithm is highly effective across various tasks, particularly in semi-supervised learning, yet its vulnerabilities are not always apparent on benchmark datasets, leading to suboptimal real-world performance. In this paper, we identify certain channel activations in pseudo-labelling methods, termed disguising channel activations (abbreviated as disguising activations in the following sections), which exacerbate the confirmation bias issue when the training data distribution is inconsistent. Even state-of-the-art semi-supervised learning models exhibit significantly different levels of activation on some channels for data from different distributions, impeding the full potential of pseudo-labelling. We take a novel perspective to address this issue by analysing the components of each channel's activation. Specifically, we model the activation of each channel as a mixture of two independent components. The mixture proportion enables us to identify the disguising activations, making it possible to employ our straightforward yet effective regularisation to attenuate the correlation between pseudo labels and disguising activations. This mitigation reduces the error risk of pseudo-label inference, leading to more robust optimisation. The regularisation introduces no additional computational cost during the inference phase and can be seamlessly integrated as a plug-in into pseudo-labelling algorithms for various downstream tasks. Our experiments demonstrate that the proposed method achieves state-of-the-art results across six benchmark datasets in diverse vision tasks, including image classification, semantic segmentation, and object detection.
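
The sketch below is only an illustration of the idea as the abstract describes it, not the authors' implementation: it scores each channel by how differently two data distributions activate it (a crude stand-in for the paper's two-component mixture modelling) and then penalises the correlation between pseudo-label confidence and the most "disguising" channels. All function names, the scoring rule, and hyper-parameters such as `top_k` are assumptions made for this example.

```python
import torch

def channel_mixture_proportion(feats_labelled, feats_unlabelled):
    """Per-channel score of how differently the two distributions activate it.

    feats_*: (N, C) pooled channel activations. Returns a (C,) score in (0, 1);
    high values are treated as 'disguising' channels in this sketch.
    """
    mu_l = feats_labelled.mean(dim=0)
    mu_u = feats_unlabelled.mean(dim=0)
    std = torch.cat([feats_labelled, feats_unlabelled]).std(dim=0) + 1e-6
    # simple standardised mean gap as a stand-in for a 2-component mixture fit
    gap = (mu_l - mu_u).abs() / std
    return torch.sigmoid(gap)

def disguising_regulariser(feats_unlabelled, pseudo_logits, proportion, top_k=16):
    """Illustrative regulariser: absolute correlation between pseudo-label
    confidence and activations on the top-k 'disguising' channels."""
    conf = pseudo_logits.softmax(dim=1).max(dim=1).values        # (N,)
    idx = proportion.topk(top_k).indices                         # disguising channels
    acts = feats_unlabelled[:, idx]                              # (N, k)
    conf_c = conf - conf.mean()
    acts_c = acts - acts.mean(dim=0, keepdim=True)
    corr = (conf_c[:, None] * acts_c).mean(dim=0) / (
        conf_c.std() * acts_c.std(dim=0) + 1e-6)
    return corr.abs().mean()

# usage sketch with random stand-in features and logits
N, C, num_classes = 32, 128, 10
feats_l = torch.randn(N, C)
feats_u = torch.randn(N, C) + 0.5      # distribution-shifted unlabelled data
logits_u = torch.randn(N, num_classes)
prop = channel_mixture_proportion(feats_l, feats_u)
loss_reg = disguising_regulariser(feats_u, logits_u, prop)
```

In a semi-supervised pipeline, a term like `loss_reg` would be added to the training objective for unlabelled data only; nothing changes at inference time, which is consistent with the abstract's claim of no additional inference cost.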
