NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 4697
Title: Private Learning Implies Online Learning: An Efficient Reduction

Reviewer 1

Summary: The goal of the paper is to show that, given a private PAC learner for a hypothesis class, one can do computationally efficient online no-regret learning with that class. The steps taken are:
1. Get a weak learner against oblivious adversaries as follows: sample the private PAC learner on a dummy dataset many times; use these hypotheses as experts with multiplicative weights. (A sketch of this step appears after this review.)
2. Transform this into a weak learner against adaptive adversaries (a cute, general reduction/observation).
3. Use online boosting to get a 'strong' learner from the weak one.

Opinion: I think the paper's result is reasonably interesting and significant (though I am not an expert in this literature). The proofs are all clean and efficiently organized. On the other hand, the technical contribution is somewhat limited -- most of it is in step 1; the others are more like (nice!) observations.

Originality: Original results as far as I know (while studying a relatively standard setting and a known open problem).

Quality: The writing and technical analysis seem to be of high quality.

Clarity: High in my opinion.

Significance: Medium in my opinion, perhaps not too high because of the relatively narrow focus and results.

Comments:
* Line 2: suggest cutting "in games"; it doesn't seem accurate to the paper.
* Lines 7-8: a bit confusing. I think you reduce the problems in the opposite direction from the one stated. Or: you online learn by efficiently calling d.p. learning as a black box. Consider clarifying.
* Line 60: should the period be a semicolon?
* Lines 95-96: the "<=" notation with a squiggle should be defined, or use big-O.
* How Def 6 interacts with the oblivious/non-oblivious adversary definition is not clear. What is the expectation over in Def 6? The sequence must be fixed in advance, so presumably there is no "adversary" at all, but clarifying may be smart.
* Algorithm 1, first line: for some reason this reference reads "Def 3.1" but points to Def 8.
* Line 231: implies -> imply

Please respond to any part of my review you think helpful.

----

Post-discussion: Thank you for the informative response. I agree that a bit of discussion on the difficulty of extensions would be valuable.
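To make step 1 concrete, here is a minimal sketch in the spirit of the summary above, not the authors' exact algorithm. The `private_learner` callable, the dummy dataset, and all parameters are illustrative assumptions; the only structural points are that the randomized private learner is rerun many times on the same unrelated data, and that the sampled hypotheses are then combined with multiplicative weights.

```python
import math

def weak_online_learner(private_learner, dummy_data, stream,
                        num_hypotheses=50, eta=0.1):
    # Step 1 of the reduction, as summarized in the review: rerun the
    # (randomized) private PAC learner on a fixed dummy dataset to sample
    # candidate hypotheses, then treat them as experts.
    experts = [private_learner(dummy_data) for _ in range(num_hypotheses)]
    weights = [1.0] * num_hypotheses
    mistakes = 0
    for x, y in stream:  # stream fixed in advance (oblivious adversary)
        preds = [h(x) for h in experts]
        vote = sum(w for w, p in zip(weights, preds) if p == 1)
        y_hat = 1 if vote >= sum(weights) / 2 else 0
        mistakes += int(y_hat != y)
        # Multiplicative-weights update: shrink the weight of every
        # expert that erred on this round.
        weights = [w * math.exp(-eta) if p != y else w
                   for w, p in zip(weights, preds)]
    return mistakes
```

If at least one sampled hypothesis is weakly accurate on the stream (which, per the summary, is what sampling the private learner buys), the standard regret bound for multiplicative weights over these experts makes the combined predictor a weak online learner.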

Reviewer 2

This is a very succinct paper and I think that it serves as a nice theoretical contribution. One exciting feature is that the reduction builds on the online boosting framework of Beygelzimer, Kale, and Luo -- to my knowledge this is the first theory result that uses their technique as a subroutine. The reduction from oblivious to adaptive adversaries seems new as well.

One comment I have would be to spend a bit more time discussing *why* the techniques work. In particular, I found the result of Lemma 9 rather counterintuitive: it essentially says that if we run any pure DP PAC learning algorithm a constant number of times on an arbitrary dataset, one of the outputs will have nontrivial classification accuracy on an unrelated target distribution with constant probability. If this hasn't been noted anywhere before, it seems worth mentioning as a structural result. (A toy illustration of this phenomenon appears after this review.)

Some misc typos/comments:
* Proof of Lemma 9: $m$ and $m_{0}$ seem to be confused in various places/not defined.
* 173: Shouldn't this be $m(1/4, 1/4)$ rather than $m(1/4, 1/2)$?
* Between lines 181 and 182: Should this be "with probability at least 1 - 1/4" rather than "with probability at least 1/4"?
* Where is $\mathcal{H}(\mathcal{D},c)$ defined?
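One reading of why Lemma 9 can hold: group privacy for a pure $\varepsilon$-DP learner bounds the ratio of its output distributions on any two size-$m$ datasets by $e^{\varepsilon m}$, so any output the learner produces with constant probability on data drawn from the target distribution is also produced with a smaller, but still constant, probability on a completely unrelated dataset, since $\varepsilon$ and $m$ are constants here. The toy experiment below (a construction for illustration, not from the paper) uses the exponential mechanism over threshold functions as a stand-in pure-DP learner and runs it repeatedly on a dataset with random labels; all names and parameters are illustrative.

```python
import math
import random

def exp_mech_learner(data, thresholds, eps):
    # Exponential mechanism (pure eps-DP with a sensitivity-1 score):
    # score each threshold by the number of points it labels correctly.
    scores = [sum(int((x >= t) == y) for x, y in data) for t in thresholds]
    weights = [math.exp(eps * s / 2) for s in scores]
    return random.choices(thresholds, weights=weights)[0]

random.seed(0)
thresholds = [i / 20 for i in range(21)]
true_t = 0.5  # target concept: label 1 iff x >= 0.5
# Dummy dataset: labels are random, unrelated to the target concept.
dummy = [(random.random(), random.randint(0, 1)) for _ in range(30)]
test = [(x, int(x >= true_t)) for x in [random.random() for _ in range(1000)]]

runs, good = 200, 0
for _ in range(runs):
    t = exp_mech_learner(dummy, thresholds, eps=1.0)
    acc = sum(int((x >= t) == y) for x, y in test) / len(test)
    good += int(acc >= 0.75)  # nontrivial accuracy on the target
print(f"fraction of runs with nontrivial accuracy: {good / runs:.2f}")
```

Because the mechanism is pure DP, it cannot concentrate too sharply on hypotheses that merely fit the random labels, so a constant fraction of runs lands on a threshold that is nontrivially accurate for the unrelated target concept.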

Reviewer 3

Originality: This work resolves an open question. The approach is based on a characterization of private learning as well as recent progress in online learning and boosting.

Quality: The technical claims in the paper are supported by valid proofs. While there might be slight inaccuracies/typos in the statements of lemmas and theorems (see Part 5, (1) and (2)), these issues can be easily fixed and do not affect the validity of the results.

Clarity: The paper is beautifully written and very easy to understand. In particular, Section 1.1 gives a clear overview of their proof and also points out some technical issues that have to be addressed. The full proof is very structured and easy to follow.

Significance: This work resolves an open question regarding the relation between online learning and differential privacy. While I am not an expert on these topics, this result seems solid and significant to me.

*** Added after author feedback ***
I have read the author feedback and thank the authors for answering the questions.