# NIPS Proceedings

## Mean Field Residual Networks: On the Edge of Chaos

Pre-Proceedings


### Abstract

We study randomly initialized residual networks using mean field theory and the theory of difference equations. Classical feedforward neural networks, such as those with tanh activations, exhibit exponential behavior on average when propagating inputs forward or gradients backward. The exponential forward dynamics causes rapid collapsing of the input space geometry, while the exponential backward dynamics causes drastic vanishing or exploding gradients. We show, in contrast, that by converting to residual connections, with most activations such as tanh or a power of the ReLU unit, the network adopts subexponential forward and backward dynamics, which in many cases are in fact polynomial. The exponents of these polynomials are obtained through analytic methods, proved correct, and verified empirically. In terms of the "edge of chaos" hypothesis, these subexponential and polynomial laws allow residual networks to hover over the boundary between "stability and chaos," thus preserving the geometry of the input space and the gradient information flow. For each activation function we study here, we initialize residual networks with different hyperparameters and train them on MNIST. Remarkably, our *initialization time* theory can accurately predict *test time* performance of these networks, mostly by tracking the expected gradient explosion of random residual networks. Importantly, we show, theoretically as well as empirically, that common initializations such as the Xavier or He schemes are not optimal for residual networks, because *the optimal initialization variances depend on the depth*. Finally, we make mathematical contributions by deriving several new identities for the kernels of powers of ReLU functions by relating them to the zeroth Bessel function of the second kind.
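The contrast between the exponential forward dynamics of feedforward networks and the subexponential dynamics of residual networks can be observed in a small simulation. The sketch below is illustrative only (not the paper's code): the widths, depths, and weight variances are arbitrary choices, and it tracks just the mean squared activation per layer.

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth = 500, 50
x = rng.standard_normal(width)  # a random input

def feedforward_norms(x, depth, sigma_w=2.0):
    # Random tanh feedforward net: h_{l+1} = tanh(W_l h_l).
    # The mean squared activation converges exponentially fast to a
    # fixed point independent of the input -- input geometry collapses.
    norms, h = [], x.copy()
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * np.sqrt(sigma_w / width)
        h = np.tanh(W @ h)
        norms.append(float(np.mean(h ** 2)))
    return norms

def residual_norms(x, depth, sigma_w=0.25):
    # Random tanh residual net: h_{l+1} = h_l + tanh(W_l h_l).
    # The mean squared activation keeps growing, but only
    # subexponentially (roughly polynomially) with depth.
    norms, h = [], x.copy()
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * np.sqrt(sigma_w / width)
        h = h + np.tanh(W @ h)
        norms.append(float(np.mean(h ** 2)))
    return norms

norms_ff = feedforward_norms(x, depth)
norms_res = residual_norms(x, depth)
```

Plotting `norms_ff` shows a rapid flattening toward a depth-independent fixed point, while `norms_res` shows steady, slow growth, consistent with the "hovering over the edge of chaos" picture described above.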