__ Summary and Contributions__: The authors analyze the adversarial robustness of the supervised sparse coding model. They consider a model that learns a representation while simultaneously admitting a precise generalization bound and a robustness certificate, and the analysis is supported by experiments.

__ Strengths__: The theoretical analysis is thorough. The authors present a generalization bound on the robust risk for the supervised sparse coding model class, and show that the output of the trained model does not change under norm-bounded adversarial perturbations.

__ Weaknesses__: The experimental section could be expanded.

__ Correctness__: Yes, the claims and method are correct. The empirical methodology is correct.

__ Clarity__: Yes, the paper is well written.

__ Relation to Prior Work__: Yes, it is clearly discussed how this work differs from previous contributions.

__ Reproducibility__: Yes

__ Additional Feedback__: Post-rebuttal update:
The authors' response is detailed and convincing; I keep my score unchanged.

__ Summary and Contributions__: This paper contributes new theoretical results on the adversarial robustness and generalization of a specific hypothesis class: a linear classifier on top of a sparsity-promoting encoder. The authors leverage the “encoder gap” of the encoder to derive both a generalization bound for adversarial test accuracy and a certificate of robustness. They provide preliminary experimental results on synthetic data and MNIST.
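To make the hypothesis class concrete, here is a minimal sketch of a linear classifier on top of a sparsity-promoting (lasso) encoder. This is an illustrative assumption, not the authors' implementation: the ISTA solver, the random dictionary `D`, and the weight vector `w` are all hypothetical.

```python
import numpy as np

def soft_threshold(z, t):
    """Element-wise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_encoder(x, D, lam, n_iter=200):
    """Sparse code of x under dictionary D via ISTA (iterative soft-thresholding)."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the smooth part
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = soft_threshold(z - (D.T @ (D @ z - x)) / L, lam / L)
    return z

def predict(x, D, w, lam):
    """Linear classifier applied to the sparsity-promoting encoding of x."""
    return np.sign(w @ lasso_encoder(x, D, lam))

# toy example with a random (hypothetical) dictionary and weight vector
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 40))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
w = rng.standard_normal(40)
x = rng.standard_normal(20)
z = lasso_encoder(x, D, lam=0.5)
print(np.count_nonzero(z), predict(x, D, w, lam=0.5))
```

The sparsity of `z` is what the paper's encoder-gap quantity is meant to measure the stability of.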

__ Strengths__: This work derives new theoretical results for adversarially robust generalization of a specific hypothesis class, which is significant because adversarially robust generalization is not a well-understood topic. Furthermore, I do not believe this particular model (of a linear classifier + a sparsity-promoting encoder) has been studied extensively for adversarial robustness before, and these results may encourage further work in similar directions.

__ Weaknesses__: The experimental results are somewhat weak. First of all, many of the theoretical claims rely on the encoder gap, yet there is only one experimental evaluation of this quantity on a real dataset (MNIST). Even then, MNIST is a very simple dataset, so I am worried about how well these experiments generalize. Regardless of the value of the encoder gap on other datasets, I would like to see more evaluations (e.g., on something like CIFAR), because understanding this quantity is important for judging whether the theoretical contributions will be practically helpful.
Furthermore, it is not clear that sparsity-promoting encoders are the right models to study, though they do represent a more complex model class than linear classifiers.

__ Correctness__: I did not check the math thoroughly, but the parts in the main text and the theoretical results all look reasonable. The theoretical results rely on a few assumptions; one is reasonable, while the second (regarding the encoder gap) is evaluated empirically (though the evaluation should be more extensive). The empirical methodology also appears correct.

__ Clarity__: Overall, the clarity is good. A few points can be clarified.
1. Given the importance of the encoder gap, having a section explaining and analyzing just the encoder gap (e.g. explaining some of the prior work of Mehta and Gray, 2013 as well) might be clearer.
2. In Theorem 4.1, many variables are used that require searching through the text to understand (e.g., the variable b). I prefer the presentation of Theorem 5.1, where all the relevant variables are clearly explained within the theorem statement.
3. For Theorem 4.1, it would be good to explain how your result compares to prior theoretical bounds on adversarially robust generalization.

__ Relation to Prior Work__: Yes. Prior work is cited extensively. A bit more background on the encoder gap (Mehta and Gray, 2013) could be explained in this work too.

__ Reproducibility__: Yes

__ Additional Feedback__: Overall, the work achieves new and interesting theoretical results for the model being studied. My main worry is the lack of experimental results on the encoder gap for datasets beyond MNIST, especially given that the size/existence of the encoder gap is crucial to the theoretical results and is an assumption made in the theoretical claims.
Thus, I would highly recommend at least evaluating the encoder gap for other (more complex than MNIST) datasets. Many techniques that work well on MNIST may not work on other datasets due to MNIST’s relative simplicity. For example, a network that binarizes pixel values (converts everything below 0.5 to 0, everything above to 1) and then classifies the result is quite adversarially robust, but the same technique will not work for more complex datasets.
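The binarization defense mentioned above can be sketched in a few lines (a hypothetical illustration, not code from the paper): any l-infinity perturbation smaller than each pixel's distance to the threshold leaves the binarized image, and hence the classification, unchanged.

```python
import numpy as np

def binarize(x, threshold=0.5):
    """Map pixel intensities in [0, 1] to {0, 1}: the simple MNIST-style
    preprocessing defense described above."""
    return (x >= threshold).astype(np.float64)

x = np.array([0.1, 0.2, 0.9, 0.8])            # clean (hypothetical) pixels
delta = np.array([0.05, -0.05, 0.05, -0.05])  # l_inf perturbation, budget 0.05
# every pixel is at least 0.3 away from the threshold, so the budget-0.05
# attack cannot flip any bit
assert np.array_equal(binarize(x), binarize(x + delta))
```

This trick exploits MNIST's near-binary pixel statistics, which is exactly why it does not transfer to datasets with richer intensity distributions.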
The paper is borderline for me; I am happy to change my score if the encoder gap can be explained further.
------
After reading all comments from fellow reviewers and the author feedback, I have increased my score from a 5 to a 6 (marginally above the acceptance threshold). The authors provided good explanations of when the encoder gap is non-negative (as long as you make s large enough, it will be), and also justification for why studying sparse coding models is a first step towards studying more general neural networks. I am not familiar enough with the background to be more confident than a 6 in my score.
I hope the authors will include the new experimental evaluation and explanations in future revisions (e.g., explanations in the main body if that makes the paper clearer, and additional experimental details in the Appendix).

__ Summary and Contributions__: This paper proposes applying a sparsity-promoting encoder followed by a linear classifier, a model that comes with robustness certificates for classification problems. The proposed method, rooted in sparse coding, is very intuitive, and the authors demonstrate its effectiveness on the MNIST dataset.

__ Strengths__: 1). The proposed method enjoys nice theoretical guarantees in the form of robustness certificates. The certificates derived in this paper are relatively natural in the context of sparse coding (for example, under the encoder gap assumption).
2). Compared with randomized smoothing, the proposed method provides deterministic bounds with much less computation.
3). The method can explicitly leverage the potential low-dimensional structure of the data to better defend against adversarial attacks.
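For context on point 2), here is a rough sketch of the competing randomized-smoothing certificate (Cohen et al., 2019), which is not part of this paper; the sample count in the comment is a typical ballpark from that literature, not a figure from the submission.

```python
from statistics import NormalDist

def smoothing_radius(p_a, sigma):
    """Certified l2 radius of randomized smoothing (Cohen et al., 2019):
    R = sigma * Phi^{-1}(p_a), valid when the smoothed classifier's top-class
    probability p_a (estimated by Monte Carlo) exceeds 1/2."""
    if p_a <= 0.5:
        return 0.0
    return sigma * NormalDist().inv_cdf(p_a)

# estimating p_a to useful precision typically takes on the order of 10^5
# noisy forward passes per input, whereas a deterministic certificate like
# the one in this paper needs only a single encoding of the input
print(round(smoothing_radius(0.99, 0.5), 3))
```

The contrast in the final comment is the computational advantage the reviewer is pointing to.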

__ Weaknesses__: 1). The method seems limited to the linear case and to the incoherence assumption. It is possible that the data does not satisfy the incoherence assumption, or that the data cannot be represented by linear combinations of the columns of D.
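For reference, incoherence assumptions in sparse coding are usually stated via the mutual coherence of the dictionary; a minimal sketch (illustrative, not from the paper):

```python
import numpy as np

def mutual_coherence(D):
    """Mutual coherence mu(D) = max_{i != j} |<d_i, d_j>| over unit-normalized
    columns; incoherence assumptions require this value to be small."""
    Dn = D / np.linalg.norm(D, axis=0)  # normalize columns to unit norm
    G = np.abs(Dn.T @ Dn)               # absolute pairwise inner products
    np.fill_diagonal(G, 0.0)            # ignore self-correlations
    return G.max()

# an orthonormal dictionary is perfectly incoherent (mu = 0)
print(mutual_coherence(np.eye(4)))
```

Real data whose best-fitting dictionary has highly correlated atoms would violate this assumption, which is the concern raised above.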

__ Correctness__: It is odd that in Figure 2(b) the accuracy is not monotonically decreasing with respect to the adversarial budget.

__ Clarity__: This paper is very well written and easy to follow.

__ Relation to Prior Work__: Yes.

__ Reproducibility__: Yes

__ Additional Feedback__: *** Post-rebuttal ***
Thanks to the authors for their response. I have carefully read the authors' responses as well as the other reviewers' comments. My concerns are addressed in the author response. I think the approach proposed in this paper provides another perspective for studying and improving robustness, and the method itself is intuitive and theoretically grounded. I think this is a good submission. I will keep my original score and recommend acceptance.
----------------------------------------------------------------------------------------------------
1). Although the method is simple and easy to follow, it may be better to describe it in the form of an algorithm and, if there is not enough space in the main body, list it in the appendix.
2). It would be better to provide the code for training and verification for the MNIST experiment.

__ Summary and Contributions__: This paper studies the adversarial robustness of the supervised sparse coding model.
It provides a uniform generalization bound for the robust risk and a robustness certificate for the learned hypotheses, which can help analyze the representations computed by sparse coding models.

__ Strengths__: This paper provides a detailed study of the robustness of a very specific sparse coding model, which can benefit theoretical work within this community.

__ Weaknesses__: The paper studies a relatively narrow problem, which makes it less appealing to a wide range of communities.
In my view, the writing lacks clarity and is hard to follow. Also, it is inappropriate to mark the text with colors in a formal submission.

__ Correctness__: I cannot objectively evaluate its correctness due to my limited knowledge of this area.

__ Clarity__: The paper is written elaborately and redundantly, making it relatively hard to follow.

__ Relation to Prior Work__: The paper studies a robustness problem for a classic model. Relevant references are adequately presented as preliminaries. However, there seem to be no related works available for a parallel comparison of the contributions.

__ Reproducibility__: Yes

__ Additional Feedback__: I have read the author response and the comments of the other reviewers.
This work is claimed to provide nice and convincing theoretical guarantees (robustness certificates) for sparse coding.
I will increase my score to 6
and defer to the other reviewers' comments for the final decision.
Thank you.