Part of Advances in Neural Information Processing Systems 35 (NeurIPS 2022) Main Conference Track
Jialun Zhang, Hong-Ming Chiu, Richard Y Zhang
The matrix completion problem seeks to recover a d×d ground truth matrix of low rank r≪d from observations of its individual elements. Real-world matrix completion is often a huge-scale optimization problem, with d so large that even the simplest full-dimension vector operations with O(d) time complexity become prohibitively expensive. Stochastic gradient descent (SGD) is one of the few algorithms capable of solving matrix completion on a huge scale, and can also naturally handle streaming data over an evolving ground truth. Unfortunately, SGD experiences a dramatic slow-down when the underlying ground truth is ill-conditioned; it requires at least O(κ log(1/ϵ)) iterations to get ϵ-close to a ground truth matrix with condition number κ. In this paper, we propose a preconditioned version of SGD that preserves all the favorable practical qualities of SGD for huge-scale online optimization while also making it agnostic to κ. For a symmetric ground truth and the root mean square error (RMSE) loss, we prove that the preconditioned SGD converges to ϵ-accuracy in O(log(1/ϵ)) iterations, with a rapid linear convergence rate as if the ground truth were perfectly conditioned with κ=1. In our numerical experiments, we observe a similar acceleration for ill-conditioned matrix completion under the RMSE loss, Euclidean distance matrix (EDM) completion under the pairwise square loss, and collaborative filtering under the Bayesian Personalized Ranking (BPR) loss.
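As a rough illustration of the kind of update the abstract describes, the sketch below runs SGD on the symmetric rank-r factorization M ≈ XXᵀ under the RMSE loss, with an optional r×r preconditioner applied to each stochastic gradient. The specific preconditioner (XᵀX)⁻¹, the function name sgd_matrix_completion, the per-step recomputation of the Gram matrix, and the toy ill-conditioned test matrix are all illustrative assumptions, not the paper's exact algorithm or experimental setup.

```python
# Minimal sketch (under the assumptions stated above) of SGD on the
# symmetric matrix-completion objective
#   f(X) = (1/2) * sum_{(i,j) observed} (x_i^T x_j - M_ij)^2,   M ≈ X X^T,
# with an optional r x r preconditioner P = (X^T X)^{-1} applied to each
# row update.
import numpy as np

def sgd_matrix_completion(d, r, samples, lr=0.1, precondition=True,
                          epochs=20, seed=0):
    """samples: list of (i, j, value) observations of a d x d symmetric matrix."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((d, r)) / np.sqrt(r)   # factor estimate, M ≈ X X^T
    for _ in range(epochs):
        rng.shuffle(samples)
        for (i, j, m_ij) in samples:
            resid = X[i] @ X[j] - m_ij             # residual on entry (i, j)
            grad_i = resid * X[j]                  # d f / d x_i
            grad_j = resid * X[i]                  # d f / d x_j
            if precondition:
                # r x r preconditioner, recomputed here for readability. A
                # huge-scale implementation would maintain the Gram matrix
                # incrementally so each step costs O(r^2), not O(d r^2).
                P = np.linalg.inv(X.T @ X + 1e-8 * np.eye(r))
                grad_i, grad_j = grad_i @ P, grad_j @ P
            X[i] -= lr * grad_i
            X[j] -= lr * grad_j
    return X

if __name__ == "__main__":
    # Hypothetical toy problem: a rank-3 symmetric matrix with condition number 100.
    d, r, kappa = 200, 3, 100.0
    rng = np.random.default_rng(1)
    U, _ = np.linalg.qr(rng.standard_normal((d, r)))
    M = U @ np.diag(np.linspace(1.0, kappa, r)) @ U.T
    obs = [(i, j, M[i, j]) for i in range(d)
           for j in rng.choice(d, 20, replace=False)]
    X = sgd_matrix_completion(d, r, obs, lr=0.2, precondition=True)
    print("relative error:", np.linalg.norm(X @ X.T - M) / np.linalg.norm(M))
```

The design point being illustrated is that the preconditioner acts only on the r×r scale of the factor, so each stochastic update remains cheap relative to any O(d) full-dimension operation, while compensating for an ill-conditioned spectrum in the ground truth.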