NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 5299
Title: SHE: A Fast and Accurate Deep Neural Network for Encrypted Data

Reviewer 1


Main contribution: The paper shows how to implement an accurate homomorphic ReLU and homomorphic max-pooling operation. This is achieved by combining the idea of logarithmic quantization, followed by shift-and-add operations, with the basic approach of TFHE (Fast Fully Homomorphic Encryption over the Torus). [BTW, the abbreviation TFHE is never expanded and explained properly in the main paper.] The authors further note that 5-bit representations are sufficient for weights, but the intermediate results of accumulation need a 16-bit representation to avoid degrading accuracy; they therefore propose mixed-bitwidth accumulators to avoid unnecessary computational cost. Using these few key ideas, the authors show how TFHE can now support fast matrix multiplications and convolutions, which previously were extremely slow. Indeed, the authors note that these kinds of operations were severely slowing down the execution of the previous state-of-the-art method, FCN.

Taken together, their ideas dramatically sped up computation while improving DNN accuracy, so this is a significant achievement. For example, all previously available methods, including FCN, used to take hundreds of seconds to process a single encrypted MNIST digit image through a very simplified 5-layer NN, and their accuracy was also poor, since they could not handle ReLUs or other such nonlinearities except through lousy and slow approximations. The improvements are so significant that the present paper/method is the first to be able to process ImageNet images, or even to run deep neural network inference on homomorphically encrypted images. Furthermore, accuracy is substantially better than with previously available methods. In this sense, the improvement over the current state of the art is very significant.

Major disclaimer: This reviewer has not worked on cryptography and is by no means an expert on privacy-preserving deep neural networks. I would defer to experts to validate that the improvement over the state of the art is truly this large.
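To make the summarized trick concrete, here is a minimal plaintext sketch of the idea: weights are logarithmically quantized to signed powers of two (roughly 5 bits each), so every "multiplication" reduces to a shift of a fixed-point activation, and partial sums are kept in a narrower accumulator. The fixed-point format, the 4+1-bit weight encoding, the 16-bit wrap-around accumulator, and all function names are illustrative assumptions on my part, not the paper's actual construction.

```python
import numpy as np

FRAC_BITS = 8          # assumed fixed-point format for activations
ACC_BITS = 16          # assumed mixed-bitwidth accumulator width
WEIGHT_EXP_BITS = 4    # 4 exponent bits + 1 sign bit ~= a 5-bit weight

def log_quantize(w):
    """Map each weight to (sign, exponent) so that w ~= sign * 2**exponent."""
    sign = np.where(w >= 0, 1, -1).astype(np.int64)
    exp = np.round(np.log2(np.abs(w) + 1e-12))
    exp = np.clip(exp, -(2 ** WEIGHT_EXP_BITS) + 1, 0)  # weights assumed <= 1
    return sign, exp.astype(np.int64)

def shift_add_dot(x, sign, exp):
    """Dot product where every multiply is a shift, summed in ACC_BITS bits."""
    x_fixed = np.round(x * (1 << FRAC_BITS)).astype(np.int64)
    acc = 0
    for xi, si, ei in zip(x_fixed, sign, exp):
        term = si * (xi >> int(-ei))          # multiply by 2**ei via a shift
        acc = ((acc + term + (1 << (ACC_BITS - 1))) % (1 << ACC_BITS)
               - (1 << (ACC_BITS - 1)))       # wrap to a signed 16-bit value
    return acc / (1 << FRAC_BITS)

# Compare the shift-add result against the exact dot product.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=32)
w = rng.uniform(-0.5, 0.5, size=32)
s, e = log_quantize(w)
print(shift_add_dot(x, s, e), float(x @ w))
```

The point of the sketch is only that no true multiplication is needed once weights are powers of two; on TFHE ciphertexts such shifts are far cheaper than homomorphic multiplications, which is where the speedup comes from.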

Reviewer 2


In my opinion, the paper has minimal novelty in terms of the techniques used, and much of it is "obvious" to the homomorphic encryption community. The main contribution is using the TFHE scheme to get around the challenges of other, arithmetic-circuit-based schemes. While this approach indeed works as the authors intend, it comes with a massive cost in terms of message expansion. It seems unrealistic to use hundreds of megabytes, or even gigabytes, to store a single sample to classify. Due to the large ciphertext size, this technique seems useless in an online setting.

Overall, I like the presentation of the paper. The performance analysis is quite detailed and compares against multiple prior works. I expect these tables and benchmarks will be used as reference points in the future. However, I would like to point out that the comparisons are not entirely fair: different papers and implementations use different security levels, and some optimize for throughput rather than latency. It would be good to make this very clear in the paper.
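For a rough sense of the message expansion being criticized here, the back-of-envelope estimate below assumes (these numbers are my own assumptions, not taken from the paper) that each plaintext bit is encrypted as one LWE ciphertext of (n + 1) 32-bit words with a typical TFHE lattice dimension of n = 630 and 8-bit pixels.

```python
# Rough estimate of bitwise-encryption ciphertext size; all parameters are
# illustrative assumptions, not the paper's reported figures.
def encrypted_image_size_bytes(height, width, channels,
                               bits_per_pixel=8, lwe_dim=630, word_bytes=4):
    plaintext_bits = height * width * channels * bits_per_pixel
    bytes_per_ciphertext = (lwe_dim + 1) * word_bytes  # one LWE sample per bit
    return plaintext_bits * bytes_per_ciphertext

for name, shape in [("MNIST 28x28x1", (28, 28, 1)),
                    ("CIFAR-10 32x32x3", (32, 32, 3)),
                    ("ImageNet 224x224x3", (224, 224, 3))]:
    size = encrypted_image_size_bytes(*shape)
    print(f"{name}: ~{size / 2**20:.0f} MiB encrypted")
```

Under these assumptions a single MNIST digit is already tens of megabytes and an ImageNet-sized image is on the order of gigabytes, which is the scale of expansion that makes an online setting look impractical.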

Reviewer 3


Strengths:
- Introduces a fast new implementation of homomorphically encrypted neural networks.
- First paper to implement LSTMs with homomorphic encryption.
- Good comparison with previous works.

Weaknesses:
- No consideration of approximate number schemes in the related work.
- No support for floating-point numbers.
- At many points in the paper, it is not clear whether the unencrypted model is a model with PAA or a model with ReLU activations.
- What is TCN? The abbreviation is explained far too late in the paper.
- The tables in Section 5 are overloaded, and the abbreviations used are not explained properly.
- Figure 3a does not highlight that the shift operation is cheap.
- Although the authors claim to run ImageNet for the first time, it is very slow and the accuracy is very low: "SHE needs 1 day and 2.5 days to test an ImageNet picture by AlexNet and ResNet-18, respectively," and accuracy is around 70%.