NeurIPS 2020

Calibrating CNNs for Lifelong Learning


Meta Review

The paper proposes a continual learning approach for CNN models. This is achieved through spatial and channel-wise calibration modules, one set for each new task, introduced between each pair of consecutive layers in the original base model. The base model is learnt on the first task, and training data from the subsequent tasks is used only to learn the calibration modules. Extensive experiments show the superiority of the proposed method in terms of accuracy, with minimal computation and storage overhead. It is important to emphasize that the proposed approach requires task labels at test time. This is a strong requirement that needs much more clarity in the paper, along with appropriate comparisons: if we know which group of classes a test sample belongs to, we can simply use the corresponding version of the learnt model, which in this paper is the calibration module. A clear distinction needs to be made between continual learning methods that use task labels during the test phase and those that do not. We strongly encourage the authors to incorporate these suggestions into the final version of the paper.
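To make the review's concern concrete, the mechanism can be sketched roughly as follows. This is a schematic illustration, not the authors' implementation: the function names, the affine form of the channel calibration, and the multiplicative spatial mask are all illustrative assumptions. The key point the sketch exposes is that inference must index the stored calibration parameters by a task label.

```python
import numpy as np

def channel_calibrate(feat, gamma, beta):
    # Per-channel affine calibration of a feature map.
    # feat: (C, H, W); gamma, beta: (C,) -- illustrative parameterization.
    return feat * gamma[:, None, None] + beta[:, None, None]

def spatial_calibrate(feat, mask):
    # Spatial calibration: reweight locations with a (H, W) mask
    # shared across channels -- again an illustrative choice.
    return feat * mask[None, :, :]

def calibrated_forward(feat, task_id, task_params):
    # The base model's feature map is adapted with the calibration
    # parameters learnt for a specific task. Note the explicit task_id:
    # without a task label at test time, we cannot pick the right entry.
    p = task_params[task_id]
    feat = channel_calibrate(feat, p["gamma"], p["beta"])
    return spatial_calibrate(feat, p["mask"])

# Toy example: one stored parameter set per task (hypothetical values).
C, H, W = 4, 8, 8
task_params = {
    0: {"gamma": np.ones(C), "beta": np.zeros(C), "mask": np.ones((H, W))},
    1: {"gamma": 2.0 * np.ones(C), "beta": np.zeros(C), "mask": np.ones((H, W))},
}
feat = np.random.rand(C, H, W)
out = calibrated_forward(feat, task_id=1, task_params=task_params)
```

With identity calibration (task 0) the feature map passes through unchanged, while task 1's parameters rescale it; the per-task storage cost is only the small `gamma`, `beta`, and `mask` tensors, which is consistent with the low overhead noted above.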