NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 3030
Title: Topology-Preserving Deep Image Segmentation

Reviewer 1

After rebuttal: From reading the other reviews, I find that my concerns were similar to R3's. The rebuttal clears up some missing parts on how to compute the matching and the influence of lambda. The intuition that the topological loss puts more weight on "difficult" regions is plausible, and the dilated baseline is a good addition to the experiments. Overall, I am satisfied and increase my rating under the assumption that these points will be added to the final version of the paper.

Original review: Overall, the paper proposes an interesting loss that is well motivated and shows good performance on the evaluated datasets.

Weaknesses:

W1 - Clarity/Reproducibility. While the mathematical part of the method description is understandable and easy to follow, it is not clear how exactly the proposed loss can be implemented, since many details are missing from the description. How is the optimal mapping gamma* computed? Exhaustively or with a smarter method? (One plausible construction is sketched after this review.) How is Dgm(f) computed? By sorting all probabilities and thresholding iteratively? How are small noisy components dealt with? These are important details that need to be understood in order to reproduce the method. Since the evaluation is carried out patch-wise: how is the prediction across patches aggregated into the final full result? A further missing detail: Figure 6a is missing a unit for time.

W2 - Evaluation. The hyperparameter lambda could be ablated to understand the influence of the balance between the two losses. From the qualitative results it seems to me that it mostly makes the prediction "thicker" to keep the topology intact. Can the same result be achieved by training with a dilated ground-truth mask? Along the same lines, lambda = 0 should also be evaluated to have a clean baseline of model performance without the proposed loss. The reproducibility evaluation states that clearly defined error bars are included; however, I cannot find any error bars in the plots, and the tables do not report statistics such as standard deviation. These would be valuable for understanding the significance of the improvement.

W3 - Usefulness of the proof. I am not fully sure that there is much benefit in the proof. For example, when you minimize the cross-entropy loss, the topology of prediction and ground truth is also the same, since the two will be identical in that case. In that sense it is not immediately clear what the benefit of the loss or the proof is. Further, line 40 states that the model has "guaranteed topological correctness", which I do not think is true: the Betti error would be 0 in that case, which it is not.
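[Editor's note] A minimal sketch of one plausible way to compute the optimal matching gamma* that W1 asks about: cast it as a minimum-cost bipartite assignment between the two persistence diagrams, allowing points to be matched to the diagonal. This is an illustration under that assumption, not the authors' implementation; the function and variable names are invented for this example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_diagrams(dgm_pred, dgm_gt):
    """Match two persistence diagrams by minimum-cost assignment.

    dgm_pred: (n, 2) array of (birth, death) points from the prediction.
    dgm_gt:   (m, 2) array of (birth, death) points from the ground truth.
    Returns index pairs (i, j) of off-diagonal matches; unmatched points
    are implicitly paired with their projection onto the diagonal.
    """
    n, m = len(dgm_pred), len(dgm_gt)

    def diag_cost(d):
        # Squared distance of each (birth, death) point to the diagonal b = d.
        return ((d[:, 1] - d[:, 0]) ** 2) / 2.0

    # Squared distances between every predicted and ground-truth point.
    cross = ((dgm_pred[:, None, :] - dgm_gt[None, :, :]) ** 2).sum(-1)

    # Pad to a square (n + m) x (n + m) cost matrix: the extra rows/columns
    # stand for "send this point to the diagonal"; diagonal-diagonal costs 0.
    cost = np.zeros((n + m, n + m))
    cost[:n, :m] = cross
    cost[:n, m:] = np.tile(diag_cost(dgm_pred)[:, None], (1, n))
    cost[n:, :m] = np.tile(diag_cost(dgm_gt)[None, :], (m, 1))

    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if i < n and j < m]
```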

Reviewer 2

Post rebuttal: Thanks to the authors for their rebuttal. Although the other reviewers raised some additional concerns regarding the description of the algorithm and the impact of parameters, I think the rebuttal addressed them satisfactorily, and I will keep my current rating.

Summary: This work introduces a loss that encourages topological correctness in image segmentation models. It does this by comparing the persistence diagrams of a predicted probability map and the corresponding ground truth. This requires finding the best matching between the birth/death diagrams of the two and computing the squared distance. The result is a differentiable loss (given a fixed matching) that can be optimized during training (a sketch of the combined objective is given after this review).

Originality: This appears to be a novel application of persistent homology as a segmentation loss. I believe it is nontrivial to translate homology into a loss that can be optimized with gradient descent, and it is a nice step away from per-pixel or heuristic losses for preserving topology.

Clarity: Overall, the paper does an excellent job of introducing the basic ideas of topology and persistent homology through the text and Figures 2, 3, and 4. The figures are particularly good at conveying the concepts.

Significance: This work leads to considerable gains on metrics related to topological consistency. It should be of great interest for any segmentation task where per-pixel losses do not capture the structure of the problem. It will be interesting to see how this loss can be extended in future work.

Some interesting directions for further exploration: 1) How well does this work if the cross-entropy loss is turned off or downweighted? What is the relative importance of the two losses, and how sensitive is the performance to the relative weighting? 2) Relatedly, suppose there is misalignment (or other geometric error) between the ground truth and the training images. Could the topological loss make training more robust to such geometry errors?
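[Editor's note] A hedged sketch of how the combined objective described in the summary might look in PyTorch, assuming the matching is held fixed (recomputed outside of autograd) so that the squared-distance term is differentiable in the predicted critical values. The weight `lam` and the tensor names are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def total_loss(logits, target, matched_pred, matched_gt, lam=0.1):
    """Cross-entropy plus a topological penalty on matched persistence points.

    logits:       (B, C, H, W) raw network output.
    target:       (B, H, W) integer class labels.
    matched_pred: (k, 2) predicted (birth, death) critical values, requires grad.
    matched_gt:   (k, 2) ground-truth (birth, death) values (0/1 in the binary case).
    lam:          relative weight of the topological term (the lambda discussed above).
    """
    ce = F.cross_entropy(logits, target)
    # Squared distance between matched points of the two persistence diagrams;
    # with the matching fixed, this term is differentiable in matched_pred.
    topo = ((matched_pred - matched_gt) ** 2).sum()
    return ce + lam * topo
```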

Reviewer 3

This paper is well organized and well written. The targeted research problem is well defined, and the proposed solution looks reasonable and promising. However, I have some concerns about the proposed method.

1. In the computation of the topological loss, it is critical to find the matching between the point sets of the estimated and true persistence diagrams. How to efficiently find the optimal matching is not made clear in the paper.

2. Because of the high computational cost of the persistence diagrams, the paper works on small patches. Will the topology of the combined patches be consistent with that of the original whole image? How are the boundaries between patches handled (one plausible aggregation scheme is sketched after this review)?

3. According to the results, the proposed method outperforms existing methods on the topology-relevant metrics but underperforms some methods on segmentation accuracy. In my opinion, segmentation accuracy does not conflict with topological consistency. Could the proposed method maximize performance on both types of metrics? If not, how about a post-processing method that preserves the topology? In this way, perhaps we could obtain a segmentation result with both high accuracy and correct topology.
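[Editor's note] For the boundary question in point 2, one plausible aggregation scheme (not necessarily what the paper does) is to predict on overlapping patches and average the probabilities in the overlap, so that seams between patches are smoothed before the final thresholding. The names `predict_patch`, `patch`, and `stride` are hypothetical.

```python
import numpy as np

def stitch_patches(image, predict_patch, patch=64, stride=32):
    """Aggregate patch-wise probability maps into a full-image prediction.

    image:         (H, W) input array.
    predict_patch: function mapping a (patch, patch) crop to a
                   (patch, patch) probability map.
    """
    H, W = image.shape
    prob = np.zeros((H, W))
    count = np.zeros((H, W))
    # Slide an overlapping window; overlapping predictions are averaged,
    # which smooths the boundaries between neighbouring patches.
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            crop = image[y:y + patch, x:x + patch]
            prob[y:y + patch, x:x + patch] += predict_patch(crop)
            count[y:y + patch, x:x + patch] += 1
    return prob / np.maximum(count, 1)
```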