Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
The authors propose a phenomenological model of the loss landscape of DNNs: they model the landscape as a set of high-dimensional wedges whose dimension is slightly lower than that of the full space, and describe how the optimizer traverses the loss landscape under common hyperparameter choices. Overall, this paper provides interesting insights into deep learning, although it is not yet clear how these insights could be used to improve the training of deep neural networks (with respect to either optimization or generalization). One problem with the paper is its presentation: some reviewers remained confused after reading it. It is critical that the authors significantly improve the writing (both text and figures) to make the paper more accessible to its audience.