Sun Dec 8 through Sat Dec 14, 2019, at Vancouver Convention Center
This paper examines the power of network architecture alone, without any contribution from synaptic weight training, to solve machine-learning tasks. Specifically, it examines the extent to which neural networks with random weights can perform tasks when the architecture has been optimized appropriately. The authors provide a novel algorithm for this architecture optimization and show that it achieves surprisingly good results with random weights (e.g., ~80% accuracy on MNIST). The paper demonstrates the potential power of architecture optimization procedures and provides a method that may prove very useful. The results may also be of broader interest for questions of identifying appropriate inductive biases in ML. It will be of significant interest to the NeurIPS community.
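To make the core idea concrete, the following is a minimal sketch, not the authors' code, of how one might score a fixed architecture under random weights: average a task loss over several independent weight draws, so the score reflects the architecture itself rather than any trained parameters. The function name, layer parameterization, and toy regression task are all illustrative assumptions.

```python
import numpy as np

def evaluate_architecture(layer_sizes, inputs, targets, n_weight_samples=5, seed=0):
    """Score a feedforward architecture by averaging its loss over several
    random (untrained) weight draws. Lower mean loss suggests the architecture
    itself, independent of training, encodes a useful inductive bias."""
    rng = np.random.default_rng(seed)
    losses = []
    for _ in range(n_weight_samples):
        # Draw a fresh, untrained weight matrix for every layer.
        weights = [rng.standard_normal((m, n)) / np.sqrt(m)
                   for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
        h = inputs
        for w in weights[:-1]:
            h = np.tanh(h @ w)       # hidden-layer activations
        preds = h @ weights[-1]      # linear readout layer
        losses.append(float(np.mean((preds - targets) ** 2)))
    return sum(losses) / len(losses)

# Toy usage: compare two candidate architectures on a small regression task.
rng = np.random.default_rng(1)
x = rng.standard_normal((64, 4))
y = np.sin(x.sum(axis=1, keepdims=True))
score_small = evaluate_architecture([4, 8, 1], x, y)
score_large = evaluate_architecture([4, 32, 32, 1], x, y)
```

An architecture-search loop would then propose structural mutations and keep those with the lowest random-weight score, which is the spirit (though not the detail) of the optimization the review describes.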