Open-set semi-supervised learning (OSSL) leverages the open-set unlabeled data that arises in practice for semi-supervised learning (SSL). Open-set unlabeled data comprises in-distribution (ID) samples from seen classes and out-of-distribution (OOD) samples from unseen classes. Prior OSSL methods first learn the decision boundary between ID and OOD from the labeled ID data and then refine it via self-training. These methods, however, tend to overtrust the labeled ID data: the distribution bias between the limited labeled samples and the full ID data causes the decision boundary to overfit, and the subsequent self-training, built on this overfitted result, cannot rectify the problem. In this paper, we address the overtrusting issue by treating OOD samples as an additional class and forming a new SSL process. Specifically, we propose SCOMatch, a novel OSSL method that 1) selects reliable OOD samples as new labeled data via our OOD memory queue and a corresponding update strategy, and 2) integrates the new SSL process into the original task through our \textbf{S}imultaneous \textbf{C}lose-set and \textbf{O}pen-set self-training. SCOMatch refines the decision boundary between the ID and OOD classes over the entire dataset, leading to improved results. Extensive experiments show that SCOMatch significantly outperforms state-of-the-art methods on various benchmarks, and its effectiveness is further verified through ablation studies and visualization.
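The abstract does not give implementation details, so the following is only a minimal PyTorch-style sketch of how its two components might fit together: an OOD memory queue that retains the unlabeled samples most confidently scored as the extra (K+1)-th class, and a simultaneous close-set (K-way) and open-set ((K+1)-way) self-training loss. The FixMatch-style weak/strong augmentation scheme, the confidence threshold `tau`, the queue-update rule, and all function names here are assumptions for illustration, not SCOMatch's actual procedure.

```python
import torch
import torch.nn.functional as F

def update_ood_queue(queue, feats, open_logits, queue_size=128):
    """Hypothetical queue update: keep the unlabeled samples that the
    (K+1)-way open-set head scores most confidently as OOD (last class)."""
    ood_conf = open_logits.softmax(dim=-1)[:, -1]     # confidence for the OOD class
    order = ood_conf.argsort(descending=True)
    # Prepend the most OOD-like features, then truncate to a fixed size.
    return torch.cat([feats[order], queue])[:queue_size]

def simultaneous_self_training_loss(close_logits_w, close_logits_s,
                                    open_logits_w, open_logits_s,
                                    tau=0.95):
    """Assumed FixMatch-style loss: pseudo-label each branch from the
    weakly augmented view, train the strongly augmented view, and keep
    only predictions above the confidence threshold tau."""
    # Close-set branch: K ID classes only.
    p_close = close_logits_w.detach().softmax(dim=-1)
    conf_c, pl_c = p_close.max(dim=-1)
    mask_c = (conf_c >= tau).float()
    loss_close = (F.cross_entropy(close_logits_s, pl_c,
                                  reduction="none") * mask_c).mean()
    # Open-set branch: K ID classes plus the OOD class, whose labeled
    # examples would be drawn from the OOD memory queue during training.
    p_open = open_logits_w.detach().softmax(dim=-1)
    conf_o, pl_o = p_open.max(dim=-1)
    mask_o = (conf_o >= tau).float()
    loss_open = (F.cross_entropy(open_logits_s, pl_o,
                                 reduction="none") * mask_o).mean()
    return loss_close + loss_open
```

In this reading, the two heads would share a backbone so that the open-set branch's OOD supervision, sourced from the queue rather than from the limited labeled set alone, also shapes the close-set decision boundary; this is one plausible wiring consistent with the abstract, not a confirmed detail.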