This paper makes connections between circuit complexity and the expressiveness of neural networks as a function of their depth. The results relate the problem of "depth separation" (showing that adding depth increases expressive power) to circuit lower bounds for threshold circuits. The authors also prove that, under certain conditions on the distribution with respect to which the approximation is measured, the weights of the network can be assumed to be polynomially bounded. The reviewers found the research direction of connecting neural networks and circuit complexity compelling. I recommend this paper for acceptance.