NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 8215
Title: Deep Multi-State Dynamic Recurrent Neural Networks Operating on Wavelet Based Neural Features for Robust Brain Machine Interfaces

This paper presents a deep recurrent network for decoding neural signals from the brain of a human participant for the control of a computer cursor. All reviewers thought this was an important problem and appreciated the large-scale comparison against other decoders on a pre-recorded dataset. Reviewer 1 thought the paper was of impressive quality and appreciated the experimental rigor and the many aspects that were empirically evaluated. They also thought the paper was well written, but asked for more clarification regarding novelty. Reviewer 2 acknowledged the good results, but questioned the nature of the hardware problem and asked for comparisons to standard approaches. Reviewer 3 shared Reviewer 1's concerns about architectural novelty but, like the other reviewers, thought the application was significant, particularly for the clinical translation of BMI systems. They noted that performance on offline pre-recorded data does not always generalize to performance with a decoder under closed-loop control.

In their response, the authors clarified the uniqueness of their architecture, addressed Reviewer 2's comments about the hardware problem, and confirmed that they had compared against many standard approaches. Following some discussion in the reviewing forum, all reviewers are in favor of accepting the paper. I recommend acceptance.

Here is some additional feedback for the authors concerning presentation, based on reviewing this paper with the SAC:

- The statement "all hyper-parameters are learnable in our DRNN" refers to parameters, not hyper-parameters.
- Use math-mode symbols for operations such as \tanh and avoid writing text in math mode (e.g. \text{Scheduled sampling}). Use a single-letter index for the epoch counter (rather than ep) and a single letter or symbol for the total number of epochs (rather than epochs); see the brief sketch after this list.
- Figure 1 is nearly unreadable because it is so compressed (as is the text on the x-axis of Figure 2). Also, Figures 5a, 6a, 7a, 8, and 9 might be unreadable to people with color blindness.
- There is a mix of teletype font and roman font text in Algorithm 1; make the font consistent. Some of the teletype seems to refer to variables (e.g. "number of batches"), but other teletype does not (e.g. "Update weights and biases").
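
For concreteness, here is a minimal LaTeX sketch of the suggested notation. The specific scheduled-sampling decay rule and recurrent update shown are hypothetical placeholders, not taken from the paper; the point is only the use of \tanh as a math operator, prose kept outside math mode, and single-letter symbols (e for the epoch counter, E for the total number of epochs).

    \documentclass{article}
    \begin{document}
    % Illustration of the suggested notation only: \tanh as a math operator,
    % prose kept outside math mode, and single-letter epoch symbols
    % ($e$ for the epoch counter, $E$ for the total number of epochs).
    % The decay rule and update below are placeholders, not the paper's equations.
    Scheduled sampling: at epoch $e = 1, \dots, E$, the probability of feeding
    the ground-truth input is decayed, for example $p_e = 1 - e/E$, and the
    recurrent update is
    \[
      h_t = \tanh\left( W x_t + U h_{t-1} + b \right).
    \]
    \end{document}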