__ Summary and Contributions__:
Review Update: I agree that reducing the dependence on privacy parameters, even marginally in some regimes, can have great practical importance and impact. However, I am comfortable sticking with my score.
This paper studies the problem of private distribution testing, giving algorithms and bounds that improve the dependence of the sample complexity on the dimension, and hence scale better to high dimensions. Informally, they give results for
(1) distinguishing the uniform distribution over {0,1}^d from any distribution that is alpha-far in TV (uniformity testing over the cube)
(2) reducing the problem of distinguishing two Gaussians with different means to uniformity testing over the cube
(3) reducing testing of certain product distributions called "extreme" to univariate testing, and thus using lower bounds from univariate testing to provide the first known lower bounds for private product-distribution testing.
(1) is the key result, and they give two algorithms, one efficient and one not. Both rely on the idea of privately computing a test statistic that has high global sensitivity but low average sensitivity, by recursively taking Lipschitz extensions. In general, computing the extension is inefficient, but it can be done approximately, giving an efficient algorithm with worse sample complexity.
Their bounds give an additive cost for privacy that scales only like sqrt(dimension) rather than linearly. For epsilon = Omega(alpha), their bound shows that you can get privacy for free.
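To make the privacy cost concrete, here is a minimal sketch (not the paper's algorithm, and with illustrative names and constants) of the naive baseline such results improve on: a chi-squared-style uniformity statistic over {-1,+1}^d, privatized with Laplace noise calibrated to its worst-case global sensitivity, which scales linearly in d.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_uniformity_stat(X, eps):
    """Naive eps-DP uniformity statistic for samples over {-1,+1}^d.

    Z = sum_j mean(X[:,j])^2 concentrates near d/n under the uniform
    distribution and is larger for product distributions far from it.
    Swapping one row moves each coordinate mean by at most 2/n, so Z
    has global sensitivity at most 4d/n; Laplace noise at that scale
    gives eps-DP, at an additive cost linear in d.
    """
    n, d = X.shape
    z = np.sum(X.mean(axis=0) ** 2)
    sensitivity = 4.0 * d / n
    return z + rng.laplace(scale=sensitivity / eps)

n, d, eps = 20000, 50, 1.0
uniform = rng.choice([-1, 1], size=(n, d))
biased = np.where(rng.random((n, d)) < 0.6, 1, -1)  # coordinate means 0.2

print(private_uniformity_stat(uniform, eps))  # small: around d/n
print(private_uniformity_stat(biased, eps))   # around d * 0.2**2 = 2
```

The linear-in-d noise scale 4d/(n*eps) here is exactly the kind of worst-case sensitivity that the Lipschitz-extension approach is meant to beat.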

__ Strengths__: - new results for privately computing a fundamental statistical primitive
- interesting use of Lipschitz extensions to reduce global sensitivity to average sensitivity while still maintaining privacy, although this is similar to prior work
- novel lower bounds for this problem
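The global-versus-average sensitivity gap mentioned above can be illustrated with a toy private mean (a generic sketch, not the paper's recursive construction; clamping stands in for the Lipschitz extension, and all names are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def dp_mean_naive(x, eps, bound):
    # Worst-case (global) sensitivity of the mean over [-bound, bound]
    # is 2*bound/n, so the Laplace noise must be calibrated to that.
    return x.mean() + rng.laplace(scale=2 * bound / (eps * len(x)))

def dp_mean_clamped(x, eps, clip):
    # Clamping is a crude surrogate for a Lipschitz extension: it agrees
    # with the true mean on "typical" data (all |x_i| <= clip) but has the
    # much smaller global sensitivity 2*clip/n on every data set.
    return np.clip(x, -clip, clip).mean() + rng.laplace(scale=2 * clip / (eps * len(x)))

x = rng.normal(0.0, 1.0, size=10_000)       # typical data: |x_i| rarely exceeds 5
print(dp_mean_naive(x, 1.0, bound=1000.0))  # noise scale 0.2: very noisy
print(dp_mean_clamped(x, 1.0, clip=5.0))    # noise scale 0.001: near the true mean
```

The point is that a function can have huge worst-case sensitivity yet small sensitivity on typical inputs; an extension that preserves the typical-case behavior while capping the worst case lets the mechanism add far less noise.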

__ Weaknesses__: * it is unclear how practically important improving the 1/epsilon dependence from multiplicative to additive really is; in many applications epsilon could be O(1)

__ Correctness__: - yes

__ Clarity__: - yes

__ Relation to Prior Work__: - yes

__ Reproducibility__: Yes

__ Additional Feedback__: This is thorough work that represents a definite contribution to private hypothesis testing. What prevents it from receiving a higher score is that I'm not sure how significant the results are, or how much interest they will hold for the NeurIPS community.

__ Summary and Contributions__: The paper studies the problem of privately testing whether a set of data samples is drawn from the uniform distribution, or is alpha-far from it. The paper also provides solutions to generalizations of this problem, e.g., biased product distributions and Gaussian distributions with diagonal covariance. The paper provides improved sample complexity bounds that have better dependence on the dimension, and shows that, without computational assumptions, the sample complexity is almost optimal in certain parameter regimes.

__ Strengths__: 1. The paper is very well-written, and the results are very easy to follow.
2. The paper uses by-now-standard techniques, such as proxy statistics that are stable on typical data sets and Lipschitz extensions, in a fairly non-trivial way.
3. Maybe this is because I am new to this line of research on property testing with differential privacy, but I found the technique of privacy filtering to be quite novel.

__ Weaknesses__: I am not familiar enough with the area to criticize the paper much, beyond requesting that the authors demonstrate some motivation for the practical importance of these problems in real-world design of privacy-preserving algorithms.

__ Correctness__: The paper seems intuitively correct.

__ Clarity__: Yes.

__ Relation to Prior Work__: Yes.

__ Reproducibility__: Yes

__ Additional Feedback__:

__ Summary and Contributions__: The paper gives algorithms and techniques for differentially private hypothesis testing for relatively simple distributions in high dimensions.
The main test case is distinguishing a sample from the uniform distribution on the d-dimensional hypercube from a product distribution over {-1,1}^d that is "far" from uniform in total variation distance.

__ Strengths__: The paper gives two interesting techniques to perform these tests.
1) Lipschitz extensions. This gives good sample complexity bounds but is inefficient.
2) Private filtering

__ Weaknesses__: Some of the better bounds are inefficient, some of the bounds are suboptimal

__ Correctness__: What I looked at seems correct; I did not verify everything.

__ Clarity__: yes

__ Relation to Prior Work__: I am not too familiar with prior work on private hypothesis testing

__ Reproducibility__: Yes

__ Additional Feedback__: I have read the authors' response.

__ Summary and Contributions__: Differential privacy is an important area of continued research with increasing applicability in industry. This paper derives an optimal hypothesis test with differential privacy. The results are theoretical in nature but clearly quantify the cost of privacy in terms of the tests themselves. The approach itself is clearly derived and focuses on the average case in order to manage the challenges posed by worst-case sensitivity (the dependence of the algorithm on single observations, which in turn conflicts with the definition of differential privacy).

__ Strengths__: The paper was clear, well-motivated, and offered concrete theoretical contributions. The authors described the motivation, algorithmic design, theoretical guarantees, and broader impact of the approach. While there was no empirical evidence to support the approach, the argument via optimality guarantees, correctness, and the associated theorems left the paper complete. I believe it will be of interest to the NeurIPS community.

__ Weaknesses__: Unfortunately, the paper was not the most readable. The ordering of the sections jumped around a lot (intro, background, results, techniques, related work, core section, impact). I would suggest tying background and related work together, and likewise results and impact, given their natural relations. Nevertheless, this is merely nitpicking, and there are no overwhelming concerns with the work. One question the reader is left with, however, is the applicability of the approach in practice, and whether the additional cost of privacy (despite its obvious benefits) leads to poor hypothesis testing in the wild.

__ Correctness__: There is no obvious issue with the paper's theory itself as far as I am aware.

__ Clarity__: Please see weaknesses.

__ Relation to Prior Work__: The paper links to related research such as hypothesis testing and its applicability in the ML community and the Lipschitz-extension technique. Differential privacy literature and related topics are assumed to be well understood by the reader.

__ Reproducibility__: Yes

__ Additional Feedback__: