NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at Vancouver Convention Center
Reviewer 1
Originality:
- The paper references prior work, and the authors note their approach differs from previous work that assumes fixed decoding models. Authors should include a brief summary of how motor-imagery BCIs operate (note that there are a variety of BCI control signals; motor imagery is one of them; line 17).
- Authors modify a previously developed Bayesian algorithm to probabilistically weight candidate models and account for non-stationarities in EEG data. (Authors should revise their claim of "novelty" in the main contributions, line 49, to state how they build on the previous Liu approach.) Authors propose enhancements to the model by adapting model parameters based on tracking functional changes in neural signals and noisy neurons.

Quality:
- Overall, the technical content appears mostly correct, but some information is missing. Sections 2.2 and 2.3 discuss previously developed algorithms/methods.
  o Line 86: In what instances is a low \alpha useful?
  o Line 97: What is w^i_{k-1} and how is it computed?
  o Authors should give the range of parameters.
- Technical details are missing in section 2.4.
  o What is the functional form of the observation function, h_{ℋ_k}(x_k), and how are the models trained?
  o Authors should clarify whether the dropout and perturbation steps are performed during model training or during online decoding.
  o Clarify the criterion used for neuron dropout. Authors state they "randomly disconnect model candidates" (line 127), but later state they "select 20 neurons with high MI values for movement decoding" (lines 170-171). How is each neuron connected to a candidate model? Based on the information in Figure 2a, which neurons are dropped from the model?
  o Clarify how perturbation weights are updated: using (10), or from data "based on the range that model candidates can deviate from the mean" (lines 250-251)?
  o Similar to what is provided in Algorithm 1, authors should include details on how decoding is performed (with the steps described in 2.4).
- Authors include results from simulated data and neural signal data to demonstrate the robustness of their approach to dynamic changes by artificially inducing noise. The simulated experiment appears to be a replication of a previous study (Liu 2017) and does not demonstrate the impact of the model modifications. The EEG experiments are well designed. Performance is compared with various algorithms, and results from empirical data indicate the weighted model is generally more robust under noisy conditions. A sensitivity analysis is also included to assess the impact of model parameters on performance. To demonstrate the utility of the model enhancements, authors should include results without neuron dropout and/or perturbation.

Clarity:
- Overall, the paper is well-written and organised, and the technical content is well explained. A few areas need clarity (section 2.4, noted above).
  o Lines 43-45: the sentence "In online prediction, it adaptively adjusts the measurement function along with changes in neural signals by dynamically selects and combines these models according to Bayesian updating mechanism." is grammatically awkward and should be revised.
  o Explain "spike sorting" (line 168).
  o Use different nomenclature for model size (number of neurons per model) and model number (number of candidate models), as the two can be confused.
  o What does X in DyEnsemble(X) refer to?
  o Define notations earlier in the text (e.g., "s" in line 179), instead of much later in section 4.4.
  o Table 1: correlation coefficient between what?
- Figure captions need to be more descriptive so the reader does not rely on information embedded in the text to interpret the figures.

Significance:
- The paper is a useful contribution to the BCI community, as it uses a weighted Bayesian model to address the problem of non-stationarities in neural signal data, in contrast to conventional methods that assume a fixed decoding model. Empirical results using EEG data demonstrate potential robustness to noisy conditions.
- The algorithmic contributions are modifications to a previously developed Bayesian model.
Although the authors introduce modifications (neuron dropout and weight perturbation) to enhance model robustness, additional results are needed to assess the utility of the proposed enhancements.

Post-rebuttal: I read the authors' feedback and appreciated their responses to the main critiques, particularly on novelty (a slight modification, (3), of Liu et al. (2017) to account for non-stationarities) and on the addition of neuron dropout (ND) and weight perturbation (WP), which should be stated in a revised manuscript. The paper would have had more impact if the authors had demonstrated the failure of Liu's approach under non-stationarities and provided a more robust simulation analysis (with realistic noise types). The authors provided new results demonstrating the utility of ND + WP in the neural experiment and a comparison with a dual decoder. Based on the feedback, I am revising my initial review and score upward.
Reviewer 2
Originality: The methods per se are not new, but their combination is new, resulting in a new model. The authors describe that the state of the art is Kalman filters to estimate trajectories from neural signals; the KF is not a proper model for changing signals. However, I miss a specific state-of-the-art description of models for non-stationary signals. I performed a quick search of possibly related papers, and this topic has been previously addressed. Because these works are not reported, comparison to other methods is not performed.

Quality: I like the approach proposed by the authors because it can deal with noise as well as changing discriminative neurons. Their results support this claim.

Clarity: This paper is clearly written.

Significance: This approach is complete because it deals with both noise and changing discriminative signals. Significance is difficult to evaluate due to the lack of comparison to other techniques dealing with the same problem.
Reviewer 3
This is an exciting paper! The non-stationarity problem has plagued the iBCI field for a long time, and there's only been one other paper that I'm aware of that has attempted to address it in a non-heuristic way. Despite the shortcomings of this work, for the subcommunity that works with this type of data, this kind of thinking is a big step forward, and I recommend publication.

For the results, 20 neurons are available for decoding, and DyEnsemble's candidates use either 15 or 18 neurons for decoding. With 18 neurons per candidate, the model can ignore up to 2 noisy neurons; with 15 neurons per candidate, it can ignore up to 5 noisy neurons. Unfortunately, baseline performance declines significantly when including fewer neurons per candidate, creating a tradeoff between noise robustness and baseline performance. Fortunately, I don't think this particular problem would pose as much of an issue in practice, because modern electrode arrays have hundreds of channels or more: even if we take only a subset of the neurons, we would still have enough neurons for good performance in the large electrode arrays typical of clinical applications. Unfortunately, as the number of channels grows, the likelihood that a given randomly generated candidate model excludes all noisy neurons declines. So while this work improves robustness in a compelling way, there's still more work to be done.

Update: After reviewing the other reviewers' comments and the author feedback, I am continuing to recommend acceptance.
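The scaling concern above can be made concrete with a small sketch (my own illustration, not from the paper: the function name, the uniform-random candidate assumption, and the channel counts beyond the paper's 20/18/15 setting are all mine). If a candidate model uses m of N neurons chosen uniformly at random and k neurons are noisy, the chance that a single candidate excludes every noisy neuron is hypergeometric, C(N-k, m) / C(N, m), and it shrinks as the array grows with the same proportions:

```python
from math import comb

def p_all_noisy_excluded(n_total, n_candidate, n_noisy):
    """Probability that a candidate of n_candidate neurons, drawn
    uniformly at random from n_total neurons, contains none of the
    n_noisy noisy neurons (hypergeometric tail)."""
    if n_total - n_noisy < n_candidate:
        return 0.0  # not enough clean neurons to fill the candidate
    return comb(n_total - n_noisy, n_candidate) / comb(n_total, n_candidate)

# The paper's setting: 20 neurons total, candidates of 18 or 15 neurons.
print(p_all_noisy_excluded(20, 18, 2))  # 1/190, ~0.005
print(p_all_noisy_excluded(20, 15, 5))  # 1/15504, ~6e-5

# Hypothetical larger arrays with the same proportions
# (75% of neurons per candidate, 10% noisy): the probability declines.
for n in (20, 40, 80):
    print(n, p_all_noisy_excluded(n, 3 * n // 4, n // 10))
```

Even though any one candidate rarely dodges all noisy neurons, an ensemble of many candidates only needs some of them to do so; the sketch just illustrates why that gets harder per candidate as channel counts grow.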