NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 7511
Title: Graph Normalizing Flows

Reviewer 1

Originality: The task itself is not new, but the paper combines normalizing flows with graph neural networks in a novel way. Related work is cited.

Quality: The paper presents many results to support the model. The authors compare their method with several existing methods and show that it is either better than or on par with them. However, the results for the generative task suggest that their method is not comparable to GraphRNN on at least one task (Community-small), and the authors do not analyze why. They also claim that their method has better memory usage, but provide no quantitative results to support this.

Clarity: The paper is well organized. Related work is introduced in detail.

Significance: Good, but the results are on small datasets. To have greater impact, and to genuinely support the memory-advantage claim, they need results on large graphs.

Reviewer 2

Significance:
1. The paper is well written and easy to follow. All experimental details are provided along with the code.
2. To my knowledge, this is the first work to replace the non-linear functions in a reversible normalizing-flow model with a GNN. This simple change yields a reversible model for graph processing.
3. On the generation side: although the computational cost is the same as GraphRNN's, O(N^2), the inference model for graph generation can be parallelized.
4. Drawback: graph generation is demonstrated only on very small datasets with at most 20 nodes. It would be better to compare against the standard baselines reported in the GraphRNN work, such as Grid, Protein, and medium-sized community/ego graphs.
5. Moreover, the results in Tables 1 and 4 are not overwhelming.

Clarification:
1. When optimizing with Eq. (5), there is a good chance the loss function is dominated by the large number of non-edge terms. Have you considered balancing the loss between positive and negative edges? (One such scheme is sketched below.)
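A minimal sketch of the balancing the clarification asks about, assuming an edge-wise binary cross-entropy reconstruction loss over all node pairs as in Eq. (5); the function name `balanced_edge_loss` and the exact weighting rule are illustrative, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def balanced_edge_loss(logits: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    """BCE over all node pairs, up-weighting the rare positive (edge)
    terms so the dense non-edge terms do not dominate the loss."""
    n_pos = adj.sum()
    n_neg = adj.numel() - n_pos
    # Weight each positive term by the negative-to-positive ratio.
    pos_weight = n_neg / torch.clamp(n_pos, min=1.0)
    return F.binary_cross_entropy_with_logits(logits, adj, pos_weight=pos_weight)

# Usage: logits are predicted edge scores, adj is the 0/1 adjacency matrix.
logits = torch.randn(20, 20)
adj = (torch.rand(20, 20) < 0.1).float()  # a sparse graph: ~10% edges
loss = balanced_edge_loss(logits, adj)
```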

Reviewer 3

The paper introduces a graph formulation of normalizing flows. The model is clearly presented and justified. The numerical experiments are less convincing than the presentation of the model: the results are only marginally better than the baselines, and the baselines are not the SOTA techniques, for example for the QM9 molecule regression results or the semi-supervised graph clustering techniques. An advantage of normalizing flows is reducing the memory footprint of GNNs, but this is not illustrated on any large-graph problem. Also, what is the speed/complexity of GNFs compared to GNNs?
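For context on the memory point: reversible flows avoid caching activations because each layer's inputs can be recomputed exactly from its outputs during backpropagation. A minimal sketch of that property, assuming an additive (NICE-style) coupling for simplicity; the MLPs stand in for the paper's GNN message-passing updates, and the actual GNF coupling may differ:

```python
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """Reversible block: y1 = x1 + F(x2), y2 = x2 + G(y1).
    F and G may be arbitrary (non-invertible) networks; the block is
    invertible by construction."""
    def __init__(self, f: nn.Module, g: nn.Module):
        super().__init__()
        self.f, self.g = f, g

    def forward(self, x1, x2):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):
        # Inputs are recovered from outputs, so intermediate activations
        # never need to be stored for the backward pass.
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2

# Sanity check, with small MLPs standing in for GNN blocks.
f = nn.Sequential(nn.Linear(8, 8), nn.Tanh(), nn.Linear(8, 8))
g = nn.Sequential(nn.Linear(8, 8), nn.Tanh(), nn.Linear(8, 8))
block = AdditiveCoupling(f, g)
x1, x2 = torch.randn(4, 8), torch.randn(4, 8)
r1, r2 = block.inverse(*block.forward(x1, x2))
assert torch.allclose(r1, x1, atol=1e-5) and torch.allclose(r2, x2, atol=1e-5)
```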

Reviewer 4

The paper is novel in bridging normalizing flows and graph neural networks, but none of the sub-parts is novel on its own. The work provides diverse experiments; however, every task is very simple, and there is no significant performance boost on any of them. The improvements on the supervised tasks are marginal compared to a vanilla GNN. The density estimation experiments, an important application, use an oversimplified dataset, and there are no experiments on other density estimation benchmarks. The graph auto-encoder part is not a core contribution, but rather an engineering implementation.