NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 2880
Title: Input Similarity from the Neural Network Perspective

Reviewer 1

The paper's main contribution is a similarity measure whose core is the inner product of the network's gradients at the two data points whose similarity we want to measure. The paper then develops the formula for a shared-weights multiclass classifier so that the measure can be applied to modern neural networks (a minimal sketch of the core quantity is given after the minor points below). The presented theory seems well grounded and original. The introduction, specifically the motivation part and the similarity section, is quite clear and invites the reader to dive into the details. Good job. The theory is developed step by step, from simple NNs to modern ones, in an easy-to-follow way.

The paper might benefit from higher significance: the developed approach has a lot of potential for insight; however, its primary purpose, to better understand the self-denoising effect, was only weakly achieved. Moreover, experiments demonstrating successful applications of the method are lacking. One thing that worries me is that this statistic can be highly unstable during training and may therefore not be usable in practice.

Minor points:
- Pg. 7 doesn't have all of its line numbers.
- Double-check the math notation, e.g., the variance expression on pg. 7.
- Figure 4 doesn't seem to have a log scale.
- The notation in Section 2.4 is confusing; a network diagram, more explanation of the notation, or anything else the authors find useful would help.
- Lines 124-139 are unclear, specifically the rotation-invariance and information-metric aspects (and here, too, line numbers are missing).
- A more straightforward outline would help: after reading the introduction, one expects an application section to follow the theory of the measure and the density estimation (Sections 2-4). Instead, "Enforcing similarity" (Section 5) follows, which relates to the training stage and distracts from the main point. There are too many mini-topics, which makes the paper harder to follow: for example, in Section 4, density estimation branches off into a simulation description, an overfitting analysis, and an uncertainty analysis, which I find confusing.
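To make the core quantity concrete, here is a minimal sketch (my own paraphrase in PyTorch, not the authors' code; the scalar-output assumption, the cosine-style normalization, and all names are mine) of the gradient inner product the measure is built on:

    # Minimal sketch of the gradient inner-product similarity (my paraphrase, not the
    # authors' code). Assumes a scalar-output network.
    import torch

    def parameter_gradient(net, x):
        """Flattened gradient of the (scalar) network output w.r.t. all parameters, at input x."""
        out = net(x).sum()  # scalar output assumed; .sum() keeps the sketch shape-agnostic
        grads = torch.autograd.grad(out, list(net.parameters()))
        return torch.cat([g.reshape(-1) for g in grads])

    def similarity(net, x1, x2, normalized=True):
        g1, g2 = parameter_gradient(net, x1), parameter_gradient(net, x2)
        s = torch.dot(g1, g2)
        if normalized:
            s = s / (g1.norm() * g2.norm())  # cosine-type normalization, value in [-1, 1]
        return s.item()

    # Toy usage: a small network and two random inputs.
    net = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
    x1, x2 = torch.randn(1, 4), torch.randn(1, 4)
    print(similarity(net, x1, x2))

The shared-weights multiclass case that the paper develops replaces this scalar gradient with per-output gradients; the sketch only covers the simplest setting.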

Reviewer 2

After reading the rebuttal, I am inclined to change my review as well, conditioned on the following:

1. The structure of the paper should be revised so that all of the validation material is moved into the validation section. The split validation (indeed, at least half of it is presented in the introduction) is a large detriment to the quality of the paper.
2. An introduction section needs to be added to properly frame the problem and mention related work. I don't like the "motivation-as-introduction" section, as it does not present the wider scope within which the paper lies.
3. A clearer, well-presented argument is needed as to precisely how the dataset self-denoising experiment is relevant. The rebuttal does a great job of this, and it leaves me convinced of the method's strong applicability to real-world datasets in general-purpose training.

Originality: The proposed method is highly novel and original. Within the scope of interpretability, there are no clear metrics to evaluate input similarity. This is compounded by the fact that neural networks remain opaque, which increasingly becomes an issue as model interpretability becomes a leading concern. The related work is not well fleshed out and could use improvement. For example, there may be a connection between the proposed method and influence functions. Consider also that generative models (in particular GANs) provide some notion of similarity in latent-variable space. This connection deserves some highlighting.

Quality: The submission is of mostly high quality. The proposed method has many useful theoretical properties that are well highlighted in the paper with insightful proofs and commentary. The empirical validation is a bit lacking, as no clear use case is identified.

Clarity: The paper is well written and well structured. It is easy to read, and the content is well presented. The proofs are of high quality and easy to follow.

Significance: The paper is of high significance. I think the proposed approach and direction are both likely to yield further results. The proposed tool is likely to be useful as a visualization tool "as-is"; it is also likely to be useful as an interpretability tool in further work.

Reviewer 3

=== Post rebuttal ===

Thanks for addressing most of my concerns and adding experiments, and great job on the added self-denoising analysis; I think it's a very cool application of the proposed measure! I'm updating my score from 4 to 6.

=== Original review ===

The paper is very clearly written and has a nice mixture of formal statements with intuitive explanations and examples.

My main concern is with the numerical experiments. There are three experiments: a) on a toy dataset, to show that the similarity measure makes sense (and it does); b) on MNIST, to show that one can use the proposed measure to speed up training; c) on a dataset of satellite imagery, to show that the proposed measure can be used to get insights about what's going on in the network and in the training process. However, the results on MNIST are not impressive at all (a tiny speed-up on a very small dataset, and both methods converge to 80% validation accuracy, while on MNIST it should be around 98-99%). Also, none of the experiments includes a comparison against any baseline. At least a comparison against perceptual losses (which the paper discusses as similar but less justified, see lines 98-99) should be included; a sketch of what such a baseline could look like is given after the references. Experiment c) is supposed to show how to get insights by using the proposed measure, but I don't understand what new knowledge or intuition about the network and the problem was obtained during the experiment. Similar to experiment a), experiment c) mainly just shows that the proposed measure makes sense, which is not enough to justify using it.

Some less important points:
- While Theorem 1 is an interesting result and seems to motivate the (also very interesting) link to the perceptual loss (lines 98-99), it feels a bit left out: I'm not sure this theorem is directly used anywhere else in the paper. I'm not saying it should be removed, but could it be linked to other sections somehow?
- There might be interesting connections of the proposed measure to deep image priors [1] (which, similar to the satellite-images experiment, have the flavor of "denoising based on similarity between patches") and to cycle-consistency papers for semi-supervised learning, e.g., [2, 3], which is close to the group-invariance idea mentioned on line 197.
- Calling the proposed measure "proper" compared to other measures seems a bit far-fetched.
- Line 191: there seems to be a missing absolute value in the denominator.

[1] D. Ulyanov, A. Vedaldi, and V. Lempitsky. Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
[2] P. Bachman, O. Alsharif, and D. Precup. Learning with pseudo-ensembles. In Advances in Neural Information Processing Systems, pages 3365-3373, 2014.
[3] S. Laine and T. Aila. Temporal ensembling for semi-supervised learning. In International Conference on Learning Representations, 2017.
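For reference, this is the kind of perceptual-feature baseline I have in mind (my own hypothetical sketch, not from the paper; the choice of VGG16 features and all names are mine): a cosine similarity between intermediate CNN features of two images.

    # Hypothetical perceptual-feature baseline (my own sketch): cosine similarity
    # between flattened VGG16 feature maps of two images.
    # Requires torchvision >= 0.13 for the `weights` argument.
    import torch
    import torchvision

    features = torchvision.models.vgg16(weights="IMAGENET1K_V1").features.eval()

    def perceptual_similarity(img1, img2):
        """img1, img2: (1, 3, H, W) tensors, normalized as expected by the VGG16 weights."""
        with torch.no_grad():
            f1 = features(img1).reshape(-1)
            f2 = features(img2).reshape(-1)
        return torch.nn.functional.cosine_similarity(f1, f2, dim=0).item()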