The paper proposes a continual learning method that learns each task in a separate orthogonal subspace. Since the updates for one task lie in the null space of the others, they do not negatively interfere with them. The authors provide extensive empirical results.

All reviewers raised significant issues, and the authors addressed them in their rebuttal. After the discussion phase, some issues were resolved, while others still did not satisfy the reviewers. Specifically, the remaining major issues are:
- R#1 finds the explanation lacking and asks for a more intuitive explanation of the method.
- R#4 finds the fixed-number-of-tasks (fixed T) assumption unacceptable; the authors propose a possible solution, but R#4 suggests it needs to be tested before acceptance.
- The other reviewers are satisfied and give accept scores.

Since the scores diverge, I carefully read the paper. Although I believe the authors can definitely provide more intuition, this is not a reason for rejection. The current writing is clear enough for acceptance, and the authors can improve it by the camera-ready deadline. The fixed-T issue is a real one, but I personally find it acceptable, as the proposed method is novel and clearly works well. Continual learning is still a new and evolving area, and such limitations can be addressed in future work. Moreover, the authors also implemented the proposed solution for dynamic T and tested it. The authors shared the results, and as AC, I am satisfied. Hence, I recommend acceptance.

In the meantime, the following major issues need to be addressed by the camera-ready deadline:
- Section 2 should be significantly shortened, since it includes some straightforward material that can be moved to the appendix. The freed space should be used to improve the intuitive explanations and to report the new experimental results.
- The dynamic-T experiment is definitely interesting and should be completed and reported in the camera-ready version.