Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning

Authors: 

Eric Arazo, Diego Ortego, Paul Albert, Noel O’Connor, Kevin McGuinness

Publication Type: 
Refereed Original Article
Abstract: 
Semi-supervised learning, i.e. jointly learning from labeled and unlabeled samples, is an active research topic due to its key role in relaxing human annotation constraints. In the context of image classification, recent advances in learning from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples. We, conversely, propose to learn from unlabeled data by generating soft pseudo-labels from the network predictions. We show that naive pseudo-labeling overfits to incorrect pseudo-labels due to the so-called confirmation bias, and demonstrate that label noise and mixup augmentation are effective regularization techniques for reducing it. The proposed approach achieves state-of-the-art results on CIFAR-10/100 and Mini-ImageNet despite being much simpler than other state-of-the-art methods. These results demonstrate that pseudo-labeling can outperform consistency regularization methods, whereas previous work suggested the opposite. Source code is available at https://git.io/fjQsC.
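
The sketch below is a minimal, hypothetical illustration of the core idea summarized in the abstract: training on soft pseudo-labels generated from the network's own predictions, with mixup as a regularizer against confirmation bias. It assumes PyTorch, and the names (model, x_u, alpha) are placeholders rather than the authors' actual code, which is available at the repository linked above.

    # Minimal sketch: soft pseudo-labeling on unlabeled data with mixup regularization.
    # Assumes `model` is an image classifier returning logits and `x_u` is an
    # unlabeled mini-batch of images. Illustrative only.
    import torch
    import torch.nn.functional as F
    from torch.distributions import Beta

    def pseudo_label_mixup_loss(model, x_u, alpha=1.0):
        # 1. Soft pseudo-labels from the network's own predictions
        #    (no gradient flows through the targets).
        with torch.no_grad():
            q = F.softmax(model(x_u), dim=1)

        # 2. Mixup: blend random pairs of unlabeled images and their soft targets.
        lam = Beta(alpha, alpha).sample().item()
        idx = torch.randperm(x_u.size(0), device=x_u.device)
        x_mix = lam * x_u + (1 - lam) * x_u[idx]
        q_mix = lam * q + (1 - lam) * q[idx]

        # 3. Cross-entropy of the mixed inputs against the mixed soft targets.
        log_p = F.log_softmax(model(x_mix), dim=1)
        return -(q_mix * log_p).sum(dim=1).mean()

In practice this unlabeled-data loss would be combined with the standard supervised loss on the labeled samples in each training step.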
Digital Object Identifer (DOI): 
N/A
Publication Status: 
Published
Publication Date: 
09/08/2019
Journal: 
arXiv
Research Group: 
Institution: 
Dublin City University (DCU)
Open access repository: 
Yes