
On Lazy Training in Differentiable Programming

Part of: Advances in Neural Information Processing Systems 32 (NIPS 2019) pre-proceedings



Conference Event Type: Poster


Abstract

In a series of recent theoretical works, it was shown that strongly over-parameterized neural networks trained with gradient-based methods could converge exponentially fast to zero training loss, with their parameters hardly varying. In this work, we show that this "lazy training" phenomenon is not specific to over-parameterized neural networks, and is due to a choice of scaling, often implicit, that makes the model behave as its linearization around the initialization, thus yielding a model equivalent to learning with positive-definite kernels. Through a theoretical analysis, we exhibit various situations where this phenomenon arises in non-convex optimization and we provide bounds on the distance between the lazy and linearized optimization paths. Our numerical experiments bring a critical note, as we observe that the performance of commonly used non-linear deep convolutional neural networks in computer vision degrades when trained in the lazy regime. This makes it unlikely that "lazy training" is behind the many successes of neural networks in difficult high dimensional tasks.
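To make the scaling argument concrete, the following is a minimal sketch (not the authors' code) of the mechanism described in the abstract: multiplying a model h by a large scale alpha and taking correspondingly small gradient steps keeps the parameters near their initialization, so gradient descent on alpha*h(w) stays close to gradient descent on the model linearized at the initialization. The toy two-layer network, the data, and the hyperparameters (alpha, learning rate, step count) are illustrative assumptions, not the paper's experimental setup.

```python
import jax
import jax.numpy as jnp

# Toy data: 20 points in R^5 with scalar targets (assumed for illustration).
X = jax.random.normal(jax.random.PRNGKey(0), (20, 5))
y = jax.random.normal(jax.random.PRNGKey(1), (20,))

def h(w, x):
    """A small two-layer network; w is a dict of parameters."""
    return jnp.tanh(x @ w["W1"]) @ w["W2"]

w0 = {
    "W1": jax.random.normal(jax.random.PRNGKey(2), (5, 50)) / jnp.sqrt(5.0),
    "W2": jax.random.normal(jax.random.PRNGKey(3), (50,)) / jnp.sqrt(50.0),
}

alpha = 100.0  # large scale -> lazy regime

def loss(w):
    # Squared loss on the scaled model alpha * h.
    return 0.5 * jnp.mean((alpha * h(w, X) - y) ** 2)

def loss_lin(w):
    # Same loss, but on the model linearized around w0: h(w0) + Dh(w0)[w - w0].
    dw = jax.tree_util.tree_map(lambda a, b: a - b, w, w0)
    h0, jvp = jax.jvp(lambda p: h(p, X), (w0,), (dw,))
    return 0.5 * jnp.mean((alpha * (h0 + jvp) - y) ** 2)

def train(loss_fn, steps=200, lr=1e-2 / alpha**2):
    # Step size scaled by 1/alpha^2 so the scaled function moves at unit speed
    # while the parameters themselves barely move.
    w = w0
    g = jax.jit(jax.grad(loss_fn))
    for _ in range(steps):
        w = jax.tree_util.tree_map(lambda p, gp: p - lr * gp, w, g(w))
    return w

w_lazy = train(loss)
w_lin = train(loss_lin)

# Distance between the lazy and linearized optimization paths (endpoints here):
dist = jnp.sqrt(sum(jnp.sum((a - b) ** 2)
                    for a, b in zip(jax.tree_util.tree_leaves(w_lazy),
                                    jax.tree_util.tree_leaves(w_lin))))
print("distance between lazy and linearized parameters:", dist)
```

With a large alpha this distance is small and both runs leave w0 almost unchanged; shrinking alpha toward 1 lets the parameters move appreciably and the two trajectories separate, which is the regime the paper contrasts with lazy training.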