NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 4587
Title: Facility Location Problem in Differential Privacy Model Revisited

Reviewer 1

This paper studies the facility location problem subject to a privacy constraint that is a relaxation of differential privacy. Although the paper calls it the "super-set output setting", the privacy model is the same as joint differential privacy (JDP), which is studied in other works, usually related to algorithmic game theory. Essentially, the super-set output setting is the "billboard" used to show all the users, and then each user can use their own private information to determine their action based on the billboard. Although the main paper does not make this connection explicit, it is covered in the supplementary file. That being said, I think the paper is more interesting when presenting the DP case as too stringent and then seeing what can be achieved with the known relaxation of JDP. In the introduction, the paper states a nice result showing that more can be achieved with this relaxed privacy model, thus demonstrating a gap between the privacy models. The paper would be significantly stronger if the lower bound of 1/√ε matched the upper bound of 1/ε. Still, the paper shows that we can obtain decent utility using relaxations of differential privacy when DP is too stringent a condition.

However, I am unsure about the accuracy of the upper bound result. In particular, Algorithm 1 and Algorithm 2 do not have ε in the pseudocode (it appears only as an input). I am unsure how Algorithm 1 would run, as I do not know what L and L' are (are they fixed levels in the tree?). Is there some condition on L' or f that ensures Lemma 1? It seems like critical information is missing from the paper. I am unable to see how the privacy parameter affects the algorithms, despite it being an input parameter. I did not verify the lower bound analysis.

Comments:
- Explain in the related work section why previous methods cannot be used.
- In the Preliminaries: "Also denote … denote".
- What are "LDP algorithms"? (See the beginning of Section 3.)
- Page 4: "verticies".

Because of the author feedback I have increased my score from 4 to 5, since the feedback pointed out the dependence on the privacy parameter in the algorithm. Thank you for that! The paper would significantly benefit from a thorough revision of typos and from clearly stating the dependencies among the various parameters. I also think the work would benefit from framing the privacy analysis around Joint Differential Privacy, since it could then be placed in that line of work. At the very least, the similarity of their setting to JDP should not be pushed to the supplementary material.
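The "billboard" pattern the review refers to can be illustrated with a minimal sketch: a curator publishes one differentially private summary, and each user then post-processes it together with their own private data, which is the structure underlying joint differential privacy. This is a generic illustration only (the function name, the nearness radius, and the scoring rule are hypothetical), not the paper's actual algorithm.

```python
import random

def billboard_mechanism(private_points, eps, candidates):
    """Sketch of the 'billboard' pattern: publish one DP-noised summary,
    then let each user combine it with their own private data."""
    # Step 1: publish a differentially private summary of all users' data
    # (here: counts of users within distance 1 of each candidate facility,
    # perturbed with Laplace(1/eps) noise, realized as a difference of
    # two Exp(eps) draws).
    billboard = {
        c: sum(1 for p in private_points if abs(p - c) <= 1)
           + random.expovariate(eps) - random.expovariate(eps)
        for c in candidates
    }
    # Step 2: each user picks a facility using only the public billboard
    # and their own private point; this post-processing incurs no further
    # privacy cost, since user i's choice depends on other users' data
    # only through the already-private billboard.
    assignments = [max(candidates, key=lambda c: billboard[c] - abs(p - c))
                   for p in private_points]
    return billboard, assignments
```

Framing the paper's analysis this way would let the privacy guarantee follow the standard JDP argument: the billboard is ε-DP, and each user's output is a function of the billboard plus their own data alone.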

Reviewer 2

Originality: Novel use of HST metric embedding and an algorithm specialized to HST metrics.

Clarity: The overall thrust of the paper is clear; however, the grammar and writing are consistently slightly flawed throughout.

Significance: These results significantly improve the state of the art for the facility location problem under the constraint of differential privacy.

Comment:
* Why frame this problem as the super-set output setting instead of as an allocation problem subject to Joint Differential Privacy (JDP), since that is how the solution is framed?

Reviewer 3

The paper improves the approximation ratio of an important problem. The algorithm, however, is largely identical to the previous one, except that it keeps only a frontier of marked vertices as facilities rather than all of them. The change does make the analysis more challenging, but not overly so. In any case, given that facility location is such a fundamental problem in computer science and the paper improves the ratio to almost tight (to this end, I would appreciate the paper much more if the lower bound were proved to be 1/ε), it is a viable acceptance to NeurIPS.