Implicit Bias of Gradient Descent on Linear Convolutional Networks

Part of: Advances in Neural Information Processing Systems 31 (NIPS 2018)

Conference Event Type: Poster


We show that gradient descent on full-width linear convolutional networks of depth $L$ converges to a linear predictor related to the $\ell_{2/L}$ bridge penalty in the frequency domain. This is in contrast to linear fully-connected networks, where gradient descent converges to the hard-margin linear SVM solution, regardless of depth.
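The regularizer referenced above can be sketched numerically. The snippet below is a minimal illustration, not code from the paper: it evaluates an $\ell_{2/L}$ bridge penalty on the DFT coefficients of a linear predictor $\beta$ (the function name, example values, and exact normalization are assumptions for illustration).

```python
# Illustrative sketch (not from the paper): the l_{2/L} bridge penalty
# applied in the frequency domain to a 1-D linear predictor beta.
import numpy as np

def bridge_penalty_frequency_domain(beta, L):
    """Return sum_k |F(beta)_k|^(2/L), where F is the (unnormalized) DFT.

    For L = 1 this is the squared l_2 norm of the DFT coefficients; for
    L > 1 the exponent 2/L drops below 1, so the penalty favors predictors
    that are sparse in the frequency domain.
    """
    beta_hat = np.fft.fft(beta)  # frequency-domain representation of beta
    return np.sum(np.abs(beta_hat) ** (2.0 / L))

# Sanity check for L = 1: by Parseval's theorem, the unnormalized DFT
# satisfies sum_k |F(beta)_k|^2 = n * sum_m beta_m^2.
beta = np.array([1.0, -2.0, 0.5, 0.0])
n = len(beta)
assert np.isclose(bridge_penalty_frequency_domain(beta, L=1),
                  n * np.sum(beta ** 2))
```

Increasing the depth $L$ lowers the exponent $2/L$, which is one way to see why deeper linear convolutional networks bias gradient descent toward predictors with sparse frequency support.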