Mini-Batch Optimization of Contrastive Loss
Abstract
In this paper, we study the effect of mini-batch selection on contrastive loss and propose new mini-batch selection methods to improve efficiency. Theoretically, we show that the full-batch and mini-batch settings share the same solution, the simplex Equiangular Tight Frame (ETF), provided that all $\binom{N}{B}$ possible mini-batches are seen during training. However, when only a subset of these batches is seen, mini-batch training can converge to suboptimal solutions. To address this issue, we propose efficient mini-batch selection methods, and our experiments demonstrate that they reach a near-optimal solution in fewer gradient steps than existing mini-batch selection methods.
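For concreteness, the sketch below illustrates what a per-mini-batch contrastive loss of this kind typically looks like: a symmetric InfoNCE-style objective computed over $B$ paired embeddings. This is a generic illustration under assumed conventions (function name, unit-normalized embeddings, a temperature of 0.1), not the paper's exact objective; the full-batch setting corresponds to $B = N$, and the theoretical comparison concerns averaging this loss over all $\binom{N}{B}$ mini-batches versus a subset of them.

```python
# Sketch of a mini-batch contrastive (InfoNCE-style) loss over B paired embeddings.
# Names and the symmetric two-direction form are illustrative assumptions.
import numpy as np
from scipy.special import logsumexp

def minibatch_contrastive_loss(u, v, temperature=0.1):
    """u, v: (B, d) arrays of paired embeddings for one mini-batch."""
    # Normalize embeddings to the unit sphere.
    u = u / np.linalg.norm(u, axis=1, keepdims=True)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    # (B, B) similarity matrix; diagonal entries correspond to positive pairs.
    logits = u @ v.T / temperature
    # Log-softmax along rows (u -> v) and columns (v -> u).
    log_p_rows = logits - logsumexp(logits, axis=1, keepdims=True)
    log_p_cols = logits - logsumexp(logits, axis=0, keepdims=True)
    idx = np.arange(len(u))
    # Average negative log-likelihood of the positive pairs in both directions.
    return -0.5 * (log_p_rows[idx, idx].mean() + log_p_cols[idx, idx].mean())

# Example usage with random embeddings (B = 8 pairs, dimension d = 32).
rng = np.random.default_rng(0)
print(minibatch_contrastive_loss(rng.normal(size=(8, 32)), rng.normal(size=(8, 32))))
```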