NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 7860
Title: User-Specified Local Differential Privacy in Unconstrained Adaptive Online Learning

Reviewer 1

Summary: The paper investigates an online convex optimization (OCO) problem in which the gradients of the loss functions are corrupted by random noise from an unknown symmetric distribution, and the norms of the gradients are not assumed to be bounded. The authors propose an algorithm that achieves state-of-the-art regret bounds in terms of the expected gradient norms in this noisy setting. The algorithm consists of a reduction to a one-dimensional coin-betting problem combined with a standard OCO algorithm.

Comments: The theoretical analyses are solid and strong and subsume previous work, e.g., Jun-Orabona-COLT19. However, I feel that the topic and results of this paper look quite similar to that previous COLT paper; in fact, the algorithms are also similar. I also wonder how closely the local differential privacy setting is related to the noisy OCO setting, and I am afraid the connection is not so strong. Overall, the theoretical results go beyond standard ones, but the similarity to Jun-Orabona-COLT19 makes me feel that the contribution is a bit incremental. After reading the rebuttal comments, I understand the technical contribution better and have raised my score.
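As an illustration of the coin-betting reduction mentioned in the summary, below is a minimal sketch of a generic one-dimensional Krichevsky-Trofimov (KT) coin bettor, assuming (sub)gradients lie in [-1, 1]; this is not the paper's algorithm (which uses a different potential designed for noisy, unbounded gradients), and the class and variable names are placeholders chosen for this sketch.

```python
class KTCoinBetting:
    """Generic one-dimensional Krichevsky-Trofimov coin-betting learner (sketch).

    Assumes (sub)gradients lie in [-1, 1]; the paper's algorithm replaces this
    with a potential tailored to noisy, unbounded gradients.
    """

    def __init__(self, initial_wealth=1.0):
        self.wealth = initial_wealth   # current wealth; a wealth lower bound yields a regret upper bound
        self.grad_sum = 0.0            # running sum of negated past gradients (the "coin outcomes")
        self.t = 0                     # round counter

    def predict(self):
        """Return the bet w_t = beta_t * Wealth_{t-1}, with KT betting fraction beta_t."""
        self.t += 1
        beta = self.grad_sum / self.t
        return beta * self.wealth

    def update(self, g, w):
        """Observe gradient g in [-1, 1] for the played prediction w and update the wealth."""
        self.wealth -= g * w           # Wealth_t = Wealth_{t-1} - g_t * w_t
        self.grad_sum -= g
```

By the standard reward-regret duality, a lower bound on the final wealth of such a bettor translates directly into an upper bound on the regret of the induced predictions.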

Reviewer 2

Achieving data-dependent bounds for OCO has been an important topic in recent years, and the problem has been solved for OCO with bounded gradients. For noisy OCO, a recent COLT paper made some progress towards the desired goal by achieving a competitor-dependent bound. However, that result still depends on the Lipschitz constant rather than on the noisy gradients received in the game. In this paper, the authors solve the above problem for a special but important case of noisy OCO, which requires symmetric random gradients. The main framework is based on the reward-regret duality paradigm with a carefully chosen potential function in terms of the symmetric random gradients. Building on this adaptive noisy OCO algorithm, the authors use it to solve OCO under an LDP constraint, which naturally satisfies the condition on symmetric random gradients, and obtain the corresponding data-dependent bound. Besides, the authors also design a sparse-gradient version, which may achieve better performance when the competitor or the sub-gradients are sparse. Although how to achieve a completely data-dependent regret bound for general noisy OCO is not yet clear, this paper solves the problem for a quite important case (i.e., symmetric random gradients), which can be directly used for OCO under an LDP constraint to achieve the corresponding data-dependent bound. For this reason, I tend to accept this paper.

Below are some comments and suggestions:
1. In equation (1), $\sigma_t$ should be $\sigma_t^2$.
2. In lines 196-197, what does the phrase "multiplying $1-\mathbb{E}[v\tilde{g}_t]$ for $t=1,2,\dots$" mean? Please make it clear.
3. How is the concrete update form of $w_t$ (i.e., equation (7) in the main part and equation (4) in the appendix) obtained? Please make this clear.
4. In line 240, "turn in into" should be "turn it into".
5. In line 260, "use" is redundant.
6. In line 290, I am confused by the claim that "$\xi_t$ can be sparse as well". Since $\tau_{t,j}$ determines the magnitude of the noise $\xi_{t,j}$ and each $\tau_{t,j}$ has an upper bound (namely the privacy budget $\epsilon$), $\xi_{t,j}$ cannot be arbitrarily small, so I think $\xi_t$ is still a dense vector.
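To make concrete the symmetric-noise condition that the LDP reduction exploits (and the density concern raised in comment 6), here is a minimal sketch of per-coordinate Laplace gradient perturbation under an overall budget $\epsilon$; the clipping and the even budget split across coordinates are assumptions made for this sketch and are not taken from the paper.

```python
import numpy as np

def ldp_perturb_gradient(grad, clip=1.0, eps=1.0, rng=None):
    """Sketch of per-report Laplace perturbation for epsilon-LDP.

    Assumptions (not from the paper): the gradient is clipped coordinate-wise
    to [-clip, clip], so its L1 sensitivity is at most 2 * clip * d, and the
    whole budget eps is spent on one report, giving Laplace scale
    2 * clip * d / eps.
    """
    rng = np.random.default_rng() if rng is None else rng
    g = np.clip(np.asarray(grad, dtype=float), -clip, clip)
    d = g.size
    scale = 2.0 * clip * d / eps                      # Laplace scale = sensitivity / eps
    noise = rng.laplace(loc=0.0, scale=scale, size=g.shape)
    return g + noise                                  # zero-mean, symmetric perturbation
```

Because the Laplace noise is zero-mean and symmetric, the perturbed gradient satisfies a symmetric-random-gradient condition; at the same time, every coordinate receives noise, which is the point of comment 6: the reported vector is dense even when the true gradient is sparse.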

Reviewer 3

The paper deals with an interesting topic. The proposal is novel, sound, supported by theoretical results, and convincing.