About contrastive unsupervised representation learning for classification and its convergence

Submitted for publication, 2020

Ibrahim Merad, Yiyang Yu, Emmanuel Bacry and Stéphane Gaïffas

Abstract. Contrastive representation learning has recently proven very efficient for self-supervised training. These methods have been successfully used to train encoders that perform comparably to supervised training on downstream classification tasks. A few works have started to build a theoretical framework around contrastive learning in which guarantees for its performance can be proven. We provide extensions of these results to training with multiple negative samples and to multiway classification. Furthermore, we provide convergence guarantees for the minimization of the contrastive training error with gradient descent of an overparametrized deep neural encoder, and provide numerical experiments that complement our theoretical findings.
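
To make the setting concrete, below is a minimal sketch (not the authors' code) of contrastive training with K negative samples, using a logistic-style contrastive loss in the spirit of the theoretical framework the abstract refers to; the encoder, the function names (`SimpleEncoder`, `contrastive_loss`) and all hyperparameters are illustrative assumptions, and the exact objective analyzed in the paper may differ.

```python
# Sketch of a contrastive objective with multiple negative samples.
# Assumed loss per anchor: log(1 + sum_k exp(<f(x), f(x_k^-)> - <f(x), f(x^+)>)).
import torch
import torch.nn as nn


class SimpleEncoder(nn.Module):
    """Toy overparametrized encoder mapping inputs to d-dimensional features."""

    def __init__(self, in_dim: int, hidden: int = 512, out_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def contrastive_loss(f_x, f_pos, f_negs):
    """Logistic contrastive loss with K negatives.

    f_x:    (batch, d) anchor representations
    f_pos:  (batch, d) positive-sample representations
    f_negs: (batch, K, d) negative-sample representations
    """
    pos_sim = (f_x * f_pos).sum(dim=-1, keepdim=True)    # (batch, 1)
    neg_sim = torch.einsum("bd,bkd->bk", f_x, f_negs)     # (batch, K)
    return torch.log1p(torch.exp(neg_sim - pos_sim).sum(dim=-1)).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    enc = SimpleEncoder(in_dim=32)
    opt = torch.optim.SGD(enc.parameters(), lr=0.1)
    # Random stand-ins for an anchor, its positive, and K = 5 negatives per anchor.
    x, x_pos = torch.randn(8, 32), torch.randn(8, 32)
    x_negs = torch.randn(8, 5, 32)
    f_negs = enc(x_negs.reshape(-1, 32)).reshape(8, 5, -1)
    loss = contrastive_loss(enc(x), enc(x_pos), f_negs)
    loss.backward()
    opt.step()
    print(float(loss))
```

In this sketch, minimizing the loss by gradient descent pulls the anchor representation toward its positive sample and away from the K negatives, which is the training error whose minimization the paper studies for overparametrized encoders.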

PDF Download paper here