NeurIPS 2019
Sun Dec 8th – Sat Dec 14th, 2019, Vancouver Convention Center
Reviewer 1
Summary: Conditional independence (CI) testing is an important part of causal structure learning algorithms. However, in the most general case one has to run a large number of CI tests and/or condition on a very large number of variables. This work proposes using at most two CI tests per candidate parent, each involving at most one conditioning variable, to filter the real parents of a response variable under certain conditions.

The work aims to identify direct causes of a response variable R from among a set of candidate parent variables {M_i}, where R has no observed descendants. Suppose each candidate parent M_i has a known parent P_i, and suppose there is no causal path from R to the M variables or from the M variables to the P variables. The authors show that, by performing two conditional independence tests per candidate parent, the false positive rate can be driven to zero: that is, with a perfect CI oracle, the declared parents are always real under causal faithfulness. I checked the proofs; they are correct to the best of my knowledge. In fact, the results also hold under common latents between the P and M variables.

The application is EEG: cortical activity in certain brain areas can be observed both while planning an action and while performing it, where the action is a limb movement. The response is the time it takes to move the limb within a certain duration. Special EEG data is collected in which patients are asked to plan a movement and then attempt it within a certain duration. The planning-stage cortical activity is thus the parent (P_i) of the cortical activity during movement (M_i), as it is the same variable across time. The authors argue that their data fits the model they propose and hence that their tests have very low false positive rates. They conduct tests in which a candidate brain feature is the log power of cortical activity in various frequency bands at various positions.
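The filtering scheme described above can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's exact criterion: it assumes linear-Gaussian data, uses a partial-correlation (Fisher z) test as the CI oracle, and declares M_i a cause when R is dependent on M_i both marginally and conditional on the known parent P_i. All function names are hypothetical.

```python
import numpy as np
from scipy import stats

def is_independent(x, y, z=None, alpha=0.05):
    """Gaussian CI test via (partial) correlation with a Fisher z-transform.

    Returns True if the test accepts x independent of y (given z)."""
    if z is None:
        r, _ = stats.pearsonr(x, y)
    else:
        # Residualize x and y on z (plus an intercept), then correlate residuals.
        zb = np.column_stack([z, np.ones_like(z)])
        rx = x - zb @ np.linalg.lstsq(zb, x, rcond=None)[0]
        ry = y - zb @ np.linalg.lstsq(zb, y, rcond=None)[0]
        r, _ = stats.pearsonr(rx, ry)
    # Fisher z statistic; degrees of freedom shrink by one per conditioning variable.
    dof = len(x) - (2 if z is None else 3)
    z_stat = np.sqrt(dof - 1) * np.arctanh(np.clip(r, -0.999999, 0.999999))
    p = 2 * (1 - stats.norm.cdf(abs(z_stat)))
    return p > alpha

def select_parents(R, M, P, alpha=0.05):
    """Keep M_i iff R is dependent on M_i marginally AND given P_i.

    M and P are (n_samples, n_features) arrays; column i of P is the
    known parent of column i of M."""
    selected = []
    for i in range(M.shape[1]):
        if (not is_independent(M[:, i], R, alpha=alpha)
                and not is_independent(M[:, i], R, z=P[:, i], alpha=alpha)):
            selected.append(i)
    return selected
```

On a toy chain P_0 → M_0 → R with an irrelevant second feature, only index 0 should survive the filter; with a perfect oracle this kind of two-test screen is what yields the zero-false-positive guarantee the review describes.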
The authors show that for patients who cannot move their limbs well, certain frequencies in certain areas emerge as causal factors, while in people with good motor function, other frequencies emerge as causal factors. They provide details of actual experiments done with patients and corroborate the findings with neuroscience experiments.

Originality: The notion of doing efficient CI tests under appropriate assumptions, combined with the EEG experiment and final results that corroborate the neuroscience literature, makes this very original.

Significance: It goes without saying that this would inspire the neuroscience community to adopt such techniques, directly inspired by CI testing and causality with DAG models. Further, it is great work to follow up on even within the causality community.

Clarity: The experimental setup, the mapping to their model with P, M and R variables, and the justification for using non-i.i.d. data with time as a latent variable (showing that it does not affect their conclusions) are all very clear.

Quality: Definitely very good quality empirics combined with a very cute theoretical observation.
Reviewer 2
The paper proposes a constraint-based method for selecting causal features for a given target variable in a neuroimaging context. Theoretical results are provided that yield an algorithm requiring only one independence test with one conditioning variable per candidate causal feature, under a weaker notion of faithfulness. Empirical results on simulated data show low false positive rates and relatively low false negative rates for small to medium numbers of variables. Empirical results on real data are encouraging.

In general the paper is well written, though the section on empirical results on real data could be made clearer. The method presented is novel and sound. This may be significant, as causal discovery from neuroimaging data is currently a popular area of study. However, it is somewhat difficult to determine how strong these results are, since they are not compared to any baseline.

It would be helpful if the paper discussed the fact that the conditions are only sufficient, not necessary, and the likely degree to which the method may fail to identify causes (either characterizing this theoretically or showing it empirically). The false negative rates do not appear to converge in the simulations. It would be interesting to know whether they eventually do as the sample size increases, or whether this reflects the method failing to identify some causes, and the degree to which this increases with n.
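The concern about non-converging false negative rates can be illustrated with a small Monte-Carlo sketch (my own construction, not from the paper under review): for a single marginal independence test with a weak true effect, the miss rate depends strongly on both effect size and sample size, so simulated FN rates need not vanish unless n grows. All names and parameter values below are hypothetical.

```python
import numpy as np
from scipy import stats

def fisher_z_dependent(x, y, alpha=0.05):
    """Reject independence of x and y via a Fisher z-test on the Pearson correlation."""
    r, _ = stats.pearsonr(x, y)
    z = np.sqrt(len(x) - 3) * np.arctanh(np.clip(r, -0.999999, 0.999999))
    return 2 * (1 - stats.norm.cdf(abs(z))) < alpha

def false_negative_rate(effect, n, trials=200, seed=0):
    """Fraction of trials in which a true effect M -> R goes undetected."""
    rng = np.random.default_rng(seed)
    misses = 0
    for _ in range(trials):
        M = rng.normal(size=n)
        R = effect * M + rng.normal(size=n)  # true causal link of given strength
        if not fisher_z_dependent(M, R):
            misses += 1
    return misses / trials
```

For a weak effect (say 0.05) the miss rate is high at n = 100 and drops substantially by n = 5000, which is consistent with the possibility that the plotted FN rates would eventually converge given more samples.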
Reviewer 3
My main concern with the paper was the lack of comparison with other methods. The authors explained in their response that this is because the dataset was collected anew, and that they are going to release the data. The conclusion drawn from the analysis is in accordance with the general consensus on the neuroscience problem, which serves as a weak validation of the method. One reviewer does point out that the authors made some effort to compare their method with existing Lasso variants on synthetic datasets. Thus I am inclined to raise my score to a weak accept. It would be great if the authors could be clearer about the advantages of their method compared to other methods; in my opinion, this is better explained in the response than in the paper.

---------------------Before Rebuttal---------

This paper presents a series of simplifying assumptions for causal inference and proves that only a single conditional independence test per candidate cause is required in such a regime. However, it is not very clear how the assumptions required for the algorithm are relaxed compared to existing algorithms -- only causal sufficiency is removed, and only the PC method requires this assumption. The major drawback of this paper is the lack of comparison with existing methods. It is not very clear whether other methods are applicable under the constrained conditions. In Figure 2, it appears that the large number of false negatives is acceptable; however, this would only make sense if the method were designed to capture only direct causes, whereas the method should capture both. It would be nice to see an explanation of why these false negatives appear. Figure 1 is rather small but important for understanding the type of DAGs assumed by this method; it should be made larger.