NeurIPS 2020

Continual Learning with Node-Importance based Adaptive Group Sparse Regularization

Meta Review

The paper offers a new perspective on regularizing neural networks for continual learning, drawing its core idea from network compression. The authors explain the motivation well, the method is reasonably justified, and no prior work has approached the problem in a similar way. The authors also addressed most of the issues raised by the reviewers in their rebuttal.