Single Positive Multi-Label Learning (SPML) is a multi-label classification task in which each image is annotated with only one positive label, while the remaining labels are left unannotated. Most approaches for SPML treat unannotated labels as negatives ("Assumed Negative", AN). Under this assumption, however, some positive labels are inevitably regarded as negative (false negatives), degrading model performance. Identifying false negatives is therefore crucial under the AN assumption. Previous approaches identified false negative labels using the model's outputs for the assumed negative labels. However, because these models were trained on noisy negative labels, their outputs were not reliable. It is therefore necessary to exploit the most reliable information available in SPML when identifying false negative labels. In this paper, we propose an Information Gap-based false negative Loss Rejection method (IG-LR) for SPML. We generate a masked image in which all regions are removed except the discriminative area of the single positive label. Intuitively, if the masked image contains no information about an object, the model's logit for that object should be low. Based on this intuition, we identify a label as a false negative when its logit gap between the masked image and the original image is significant. By rejecting the loss of false negatives during training, we prevent the model from being biased toward false negative labels and build more reliable models. We evaluate our method on four datasets: Pascal VOC 2012, MS COCO, NUS-WIDE, and CUB. Our method outperforms previous state-of-the-art SPML methods on most of the datasets.
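The logit-gap rejection idea described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, the per-label loss mask, and the `gap_threshold` value are all assumptions, and real IG-LR operates on batched model logits rather than plain Python lists.

```python
# Hypothetical sketch of the IG-LR rejection step (names and threshold assumed).
# For each assumed-negative label, compare the model's logit on the original
# image with its logit on a masked image that keeps only the discriminative
# region of the single positive label. A large drop suggests the image did
# contain that object, i.e. the label is a false negative, so its loss term
# is rejected from training.

def reject_false_negatives(logits_orig, logits_masked, positive_idx, gap_threshold=2.0):
    """Return a per-label loss mask: 1.0 = keep the loss term, 0.0 = reject it.

    logits_orig / logits_masked: per-class logits for the original and the
    masked image (plain lists of floats here, for illustration only).
    positive_idx: index of the single annotated positive label.
    """
    mask = []
    for c, (lo, lm) in enumerate(zip(logits_orig, logits_masked)):
        if c == positive_idx:
            mask.append(1.0)           # always keep the known positive
        elif lo - lm > gap_threshold:  # evidence for the object vanished after masking
            mask.append(0.0)           # likely false negative: reject its loss
        else:
            mask.append(1.0)           # keep as an assumed (true) negative
    return mask

# Example: class 2 is the annotated positive; class 0 shows a large logit
# drop after masking, so it is flagged as a likely false negative.
mask = reject_false_negatives([3.1, -1.0, 5.0], [0.2, -1.2, 5.0], positive_idx=2)
print(mask)  # [0.0, 1.0, 1.0]
```

In a training loop, this mask would multiply the per-label binary cross-entropy terms, so rejected labels contribute no gradient.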