NeurIPS 2020

Can the Brain Do Backpropagation? --- Exact Implementation of Backpropagation in Predictive Coding Networks


Meta Review

Following the author response, we had a long discussion. On the positive side, this is the first algorithm with local update rules that exactly simulates BP (at least asymptotically, given complete convergence at initialization).

On the negative side, all reviewers agreed that the algorithm has somewhat reduced plausibility. Specifically, in IL (the original PCN), both the input and the output are presented, and the network is given sufficient time to converge. In contrast, in Z-IL and Fa-Z-IL, only the input is presented first, the network must again be given sufficient time to converge, and only then is the output presented. In addition, the learning rule becomes more complicated (through the introduction of the Phi function), and one must detect when "the change in error node is caused by feedback input" (which seems to require some global signal). This appears more complicated and less plausible than the original IL. Another, smaller issue is condition C3, which may become troublesome in a realistic continuous-time setting. Moreover, IL itself is not highly plausible: each weight layer is exactly duplicated through identical initial conditions and learning rules, which sounds unrealistic and possibly unstable (would the two layers diverge, given some noise or mismatch in initialization or dynamics?). This should be further clarified, either theoretically or empirically.

In the end, we were all borderline, mostly leaning towards acceptance. I strongly recommend clarifying the above issues in the camera-ready version (as well as the additional issues mentioned by the reviewers), to improve the quality and impact of the manuscript.
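For concreteness, the presentation schedules being contrasted above could be sketched roughly as follows. This is a minimal illustrative sketch under simplifying assumptions (a three-layer network, simplified inference dynamics, and illustrative variable names); it is not the authors' implementation, and the exact update timing in Z-IL is only approximated here.

# Illustrative NumPy sketch of the two presentation schedules (IL vs. Z-IL).
# Network size, dynamics, and timing details are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 8, 3]                                  # input, hidden, output widths
W = [0.1 * rng.standard_normal((sizes[i + 1], sizes[i])) for i in range(2)]
f, df = np.tanh, lambda a: 1.0 - np.tanh(a) ** 2
gamma, alpha = 0.1, 0.01                           # inference rate, learning rate

def errors(v):
    # prediction errors eps[l] = v[l] - W[l-1] f(v[l-1]) for l = 1, 2
    return [None] + [v[l] - W[l - 1] @ f(v[l - 1]) for l in (1, 2)]

def inference_step(v):
    # one local update of the hidden value nodes (input and output clamped)
    eps = errors(v)
    v[1] = v[1] + gamma * (-eps[1] + df(v[1]) * (W[1].T @ eps[2]))
    return v

def il_step(x, y, T=200):
    # IL (original PCN): present input AND output, wait for convergence, then update
    v = [x, W[0] @ f(x), y]                        # output clamped from the start
    for _ in range(T):                             # relax until (approximate) convergence
        v = inference_step(v)
    eps = errors(v)
    for l in (1, 2):                               # local, Hebbian-style weight updates
        W[l - 1] += alpha * np.outer(eps[l], f(v[l - 1]))

def zil_step(x, y):
    # Z-IL: present ONLY the input and let the network converge first; since the
    # nodes start at the feedforward values, all errors are zero at that fixed point.
    v = [x, W[0] @ f(x), None]
    v[2] = y                                       # only now is the output presented
    # update each weight layer when the feedback-driven error first reaches it
    eps = errors(v)
    W[1] += alpha * np.outer(eps[2], f(v[1]))      # output-layer weights at t = 0
    v = inference_step(v)                          # error propagates one layer back
    eps = errors(v)
    W[0] += alpha * np.outer(eps[1], f(v[0]))      # next weight layer at t = 1

x, y = rng.standard_normal(4), rng.standard_normal(3)
il_step(x, y)                                      # IL schedule
zil_step(x, y)                                     # Z-IL schedule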