

Poster

ExMatch: Self-guided Exploitation for Semi-Supervised Learning with Scarce Labeled Samples

Noo-ri Kim · Jin-Seop Lee · Jee-Hyong LEE

# 87
Strong Double Blind: This paper was not made available on public preprint services during the review process.
Fri 4 Oct 1:30 a.m. PDT — 3:30 a.m. PDT

Abstract:

Semi-supervised learning uses both labeled and unlabeled samples to improve model performance while reducing labeling costs. Existing semi-supervised methods perform well when tens to hundreds of labeled samples are available, but most of them degrade sharply when only a small number of labeled samples are given. In this paper, we focus on challenging label-scarce environments, where there are only a few labeled samples per class. Our proposed model, ExMatch, is designed to obtain reliable information from unlabeled samples using self-supervised models and to utilize it for semi-supervised learning. During training, ExMatch guides the model to maintain an appropriate distribution and to resist learning from incorrect pseudo-labels, based on information from the self-supervised models and from the model itself. ExMatch shows very stable training progress and state-of-the-art performance on multiple benchmark datasets. In extremely label-scarce settings, performance improves by about 5% to 21% on CIFAR-10/100 and SVHN. ExMatch also demonstrates significant performance improvements on high-resolution and large-scale datasets such as STL-10, Tiny-ImageNet, and ImageNet.
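
The abstract does not give implementation details. As a rough illustration of the general idea of using a self-supervised encoder to vet pseudo-labels (not the authors' actual ExMatch procedure), the hypothetical sketch below keeps a pseudo-label only when the classifier's confident prediction agrees with a k-nearest-neighbor vote among labeled samples in a self-supervised embedding space. The function names, the confidence threshold, and the kNN-agreement rule itself are assumptions made for illustration.

```python
import numpy as np

def cosine_sim(a, b):
    """Pairwise cosine similarity between two sets of embeddings."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def filter_pseudo_labels(unlabeled_emb, labeled_emb, labeled_y,
                         pseudo_probs, k=5, conf_thresh=0.95):
    """Hypothetical filter: accept an unlabeled sample's pseudo-label only if
    (a) the classifier is confident and (b) the majority class among its k
    nearest labeled samples in the self-supervised embedding space agrees."""
    num_classes = pseudo_probs.shape[1]
    pseudo_y = pseudo_probs.argmax(axis=1)
    confident = pseudo_probs.max(axis=1) >= conf_thresh

    sims = cosine_sim(unlabeled_emb, labeled_emb)   # (U, L) similarities
    knn_idx = np.argsort(-sims, axis=1)[:, :k]      # k nearest labeled samples
    knn_y = labeled_y[knn_idx]                      # (U, k) neighbor labels
    knn_vote = np.array([np.bincount(row, minlength=num_classes).argmax()
                         for row in knn_y])         # majority vote per sample
    agrees = knn_vote == pseudo_y

    return pseudo_y, confident & agrees

# Toy usage: 4 classes, random embeddings and classifier outputs.
rng = np.random.default_rng(0)
labeled_emb = rng.normal(size=(40, 16))
labeled_y = rng.integers(0, 4, size=40)
unlabeled_emb = rng.normal(size=(200, 16))
logits = rng.normal(size=(200, 4))
pseudo_probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

pseudo_y, mask = filter_pseudo_labels(unlabeled_emb, labeled_emb,
                                      labeled_y, pseudo_probs)
print("accepted pseudo-labels:", int(mask.sum()), "of", len(mask))
```

In a real semi-supervised pipeline, the accepted pseudo-labels would feed the unlabeled loss term while rejected samples are ignored for that step; how ExMatch actually combines the self-supervised signal with its own predictions is described in the paper, not here.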
