NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019 at Vancouver Convention Center
This paper demonstrates theoretically that multiple forms of approximate Bayesian inference for deep neural networks (the Laplace approximation and variational inference) are equivalent to Gaussian processes. The authors formalize this connection and derive the GP covariance function corresponding to these networks, which surprisingly turns out to be the neural tangent kernel. They also establish a connection between the training procedure of the neural network and that of the GP, which is a novel contribution.

There is a growing literature on the connection between neural networks and Gaussian processes, with a variety of papers establishing the correspondence in the limit of infinitely many hidden units. This paper adds nicely to that literature by extending the connection to approximate Bayesian inference.

The reviewers found the paper insightful and sensible. Their main concern was the empirical evaluation, which they found underwhelming. That said, the experiment optimizing the hyperparameters of a DNN via the marginal likelihood of its corresponding GP is a neat and novel idea.

The recommendation is acceptance: this work provides interesting insights and connections and opens up a variety of avenues for future work.
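For context on the hyperparameter experiment mentioned above, the sketch below illustrates the general mechanism it relies on: selecting hyperparameters by maximizing the GP log marginal likelihood. This is a minimal sketch, not the authors' implementation; it uses an RBF kernel as a stand-in for the paper's NTK-derived covariance, and the data, kernel choice, and log-parameterization are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.optimize import minimize

def rbf_kernel(X1, X2, lengthscale):
    # Squared-exponential kernel; a stand-in for the NTK covariance in the paper.
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2.0 * X1 @ X2.T
    return np.exp(-0.5 * d2 / lengthscale**2)

def neg_log_marginal_likelihood(params, X, y):
    # params = (log lengthscale, log noise std); log-parameterized to stay positive.
    lengthscale, noise = np.exp(params)
    K = rbf_kernel(X, X, lengthscale) + noise**2 * np.eye(len(X))
    L, lower = cho_factor(K, lower=True)
    alpha = cho_solve((L, lower), y)
    # -log p(y | X, theta) = 0.5 y^T K^{-1} y + 0.5 log|K| + (n/2) log(2 pi),
    # with 0.5 log|K| = sum(log diag(L)) from the Cholesky factor.
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L))) + 0.5 * len(X) * np.log(2 * np.pi)

# Toy 1-D regression data (purely illustrative).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)

# Type-II maximum likelihood: optimize hyperparameters against the evidence.
result = minimize(neg_log_marginal_likelihood, x0=np.zeros(2), args=(X, y))
print("optimal lengthscale, noise std:", np.exp(result.x))
```

In the paper's setting, the kernel would instead be the neural tangent kernel of the network, so the optimized hyperparameters transfer back to the DNN itself; the code above only demonstrates the marginal-likelihood objective.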