NeurIPS 2020

Implicit Regularization in Deep Learning May Not Be Explainable by Norms


Meta Review

A strand of research that has emerged recently in the effort to theoretically understand the success of deep learning suggests that implicit regularization, arising from the choice of optimization algorithm and other heuristics, may play an important role. This paper challenges the common explanation of that regularization as the implicit minimization of norms, formally showing that in some problems (such as matrix completion with linear networks), the implicit regularization drives all norms towards infinity. The authors also suggest that implicit minimization of rank may be a more useful lens for explaining generalization in deep learning. One concern raised by the reviewers was that the theoretical results focus primarily on matrix factorization and learning with linear networks. While the authors provide empirical evidence that their results may extend to nonlinear neural networks, the reviewers suggested that the paper’s positioning (and its title) would be more accurate if it focused on matrix problems rather than deep learning at large. The paper reads very well, and the results and insights it offers are very compelling. Overall, a good paper. Accept!