NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Reviewer 1
This paper proposes the first channel coding scheme in which both the encoder and the decoder are designed through neural networks. The code seems to outperform state-of-the-art coding schemes in certain scenarios, and it also generalizes well to other channels. The paper also suggests training guidelines, which are crucial for obtaining a good channel code. My only concern is the complexity of a code designed in this way. If it ends up having complexity exponential in the block length, it would be unfair to compare it to state-of-the-art low-complexity coding schemes; the right target, in that case, should be how close it gets to the performance of maximum likelihood decoding. Nevertheless, the idea in this paper is original, and the writing is clear and easy to understand.

After rebuttal: In their feedback, the authors state that a complexity analysis will be added in the final version. I look forward to reading this part. I also read the authors' responses to the other reviewers and found them reasonable. Therefore, I am in favour of acceptance.
Reviewer 2
In recent years, several papers have employed deep learning methods to decode various classes of codes (turbo codes, linear codes, polar codes). This work focuses on turbo codes and has the more ambitious goal of jointly training the decoder and the encoder (which means that the resulting code will not be a turbo code in the traditional sense). The authors borrow some ideas from the turbo coding literature (e.g., interleaving) and use CNNs to design the decoder and encoder (as opposed to the RNNs used in several other papers). The proposed TurboAE algorithm achieves performance comparable to state-of-the-art codes (see Figure 1). This is quite impressive, even though the block length is quite short (i.e., 100 bits); in fairness, this is a common issue, as deep learning techniques tend not to scale well with the block length. The authors also consider non-AWGN channels and compare with the original turbo decoder and with DeepTurbo, proposed in [26]. Overall, the paper is interesting, clear, and well-written. I have the feeling that the authors overstate their results a bit (though the results are quite good anyway); detailed comments are in the 'Improvements' section.
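To make the encoder structure the review alludes to concrete, here is a minimal sketch, assuming a rate-1/3 design in which two 1-D CNN branches see the message bits in their original order and a third branch sees a fixed pseudo-random interleaving, echoing the classical turbo-code layout. The class name, layer sizes, and power normalization are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch (not the authors' exact architecture) of an
# interleaved rate-1/3 CNN encoder in the spirit of TurboAE.
import torch
import torch.nn as nn

class InterleavedCNNEncoder(nn.Module):
    def __init__(self, block_len=100, channels=32, kernel=5):
        super().__init__()
        pad = kernel // 2
        def branch():
            return nn.Sequential(
                nn.Conv1d(1, channels, kernel, padding=pad),
                nn.ELU(),
                nn.Conv1d(channels, 1, kernel, padding=pad),
            )
        self.branch1 = branch()
        self.branch2 = branch()
        self.branch3 = branch()  # operates on the interleaved bits
        # fixed pseudo-random interleaver, drawn once at construction
        self.register_buffer("perm", torch.randperm(block_len))

    def forward(self, bits):
        # bits: (batch, 1, block_len) in {0, 1}; map to {-1, +1} symbols
        x = 2.0 * bits - 1.0
        x_pi = x[:, :, self.perm]  # interleaved copy for the third branch
        c = torch.cat([self.branch1(x),
                       self.branch2(x),
                       self.branch3(x_pi)], dim=1)  # (batch, 3, block_len)
        # power normalization: unit average energy per code symbol
        return (c - c.mean()) / (c.std() + 1e-8)

enc = InterleavedCNNEncoder()
codeword = enc(torch.randint(0, 2, (8, 1, 100)).float())
print(codeword.shape)  # torch.Size([8, 3, 100])
```

A decoder network with mirrored structure, mapping the 3 x block_len noisy symbols back to bit logits, would complete the autoencoder.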
Reviewer 3
The idea of TurboAE is novel as far as I know, and it is interesting. It makes it possible to learn a code of relatively long block length. The alternating training process presented in this paper enables training both an encoder and a decoder simultaneously. Unfortunately, from the experimental results, the advantages of the proposed scheme (scalability, comparison to conventional turbo codes) appear not so clear. I increased my overall score from 4 to 7.
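For concreteness, here is a minimal sketch of the alternating schedule the reviewer mentions: several updates that step only the decoder's parameters, followed by a few that step only the encoder's. The `awgn` channel, step counts, and learning rates are assumptions for illustration, not the paper's hyperparameters.

```python
# Minimal sketch of alternating encoder/decoder training.
import torch
import torch.nn as nn

def awgn(codeword, snr_db=0.0):
    # additive white Gaussian noise at an assumed SNR
    sigma = 10 ** (-snr_db / 20.0)
    return codeword + sigma * torch.randn_like(codeword)

def train_alternating(encoder, decoder, epochs=10,
                      dec_steps=5, enc_steps=1,
                      batch=128, block_len=100):
    loss_fn = nn.BCEWithLogitsLoss()
    opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-4)
    opt_dec = torch.optim.Adam(decoder.parameters(), lr=1e-4)

    def step(optimizer):
        # fresh random message bits each step
        bits = torch.randint(0, 2, (batch, 1, block_len)).float()
        logits = decoder(awgn(encoder(bits)))
        loss = loss_fn(logits, bits)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()  # updates only the parameters this optimizer owns
        return loss.item()

    for _ in range(epochs):
        for _ in range(dec_steps):   # decoder phase: only decoder params stepped
            step(opt_dec)
        for _ in range(enc_steps):   # encoder phase: only encoder params stepped
            step(opt_enc)
```

With the encoder sketch above and any decoder of matching shapes, `train_alternating(enc, dec)` would run this schedule end to end.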