NeurIPS 2020

Functional Regularization for Representation Learning: A Unified Theoretical Perspective


Meta Review

This paper presents a unified framework for analyzing representation learning approaches that use unlabeled data for auxiliary tasks, such as auto-encoders and masked self-supervision. The provided sample complexity bounds show that the auxiliary task acts as a functional regularization that prunes the hypothesis space, significantly reducing the number of labeled examples sufficient for learning. The theory is confirmed experimentally on synthetic data. As I understand it, this work is the first to present a unified and natural framework for analyzing the impact of unsupervised auxiliary tasks on generalization. Consequently, the novelty of the formulation and its applicability to algorithmic approaches of broad interest to practitioners outweighed the fact that some reviewers saw the technical contributions as rather straightforward. Finally, as suggested by a reviewer, we think the authors should provide more discussion of the cases where the use of auxiliary tasks will not help (as was done in the rebuttal).