Active Learning of Multi-Index Function Models

Part of Advances in Neural Information Processing Systems 25 (NIPS 2012)


Authors

Hemant Tyagi, Volkan Cevher

Abstract

We consider the problem of actively learning \textit{multi-index} functions of the form $f(\mathbf{x}) = g(\mathbf{A}\mathbf{x}) = \sum_{i=1}^k g_i(\mathbf{a}_i^T\mathbf{x})$ from point evaluations of $f$. We assume that the function $f$ is defined on an $\ell_2$-ball in $\mathbb{R}^d$, $g$ is twice continuously differentiable almost everywhere, and $\mathbf{A} \in \mathbb{R}^{k \times d}$ is a rank-$k$ matrix, where $k \ll d$. We propose a randomized, active sampling scheme for estimating such functions with uniform approximation guarantees. Our theoretical developments leverage recent techniques from low-rank matrix recovery, which enable us to derive an estimator of the function $f$ along with sample complexity bounds. We also characterize the noise robustness of the scheme, and provide empirical evidence that the high-dimensional scaling of our sample complexity bounds is quite accurate.
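To make the problem setting concrete, the following is a minimal sketch (not the authors' sampling scheme) of the multi-index model described above: a synthetic $f(\mathbf{x}) = \sum_{i=1}^k g_i(\mathbf{a}_i^T\mathbf{x})$ with $k \ll d$, accessed only through noisy point evaluations. The specific component functions, dimensions, and noise level are illustrative assumptions.

```python
# Illustrative sketch of the multi-index model f(x) = sum_i g_i(a_i^T x);
# all concrete choices (g_i, k, d, noise level) are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)

d, k = 100, 3                      # ambient dimension d, intrinsic dimension k << d
A = rng.standard_normal((k, d))    # rank-k matrix whose rows are the directions a_i
A /= np.linalg.norm(A, axis=1, keepdims=True)

# Twice continuously differentiable ridge components g_i (illustrative choices).
g = [np.tanh, np.sin, lambda t: t**2]

def f(x):
    """Multi-index function f(x) = sum_i g_i(a_i^T x)."""
    return sum(g_i(a_i @ x) for g_i, a_i in zip(g, A))

def query(x, noise_std=0.01):
    """Noisy point evaluation of f: the only access the learner has."""
    return f(x) + noise_std * rng.standard_normal()

# Evaluate f at a query point on the unit l2-ball in R^d.
x = rng.standard_normal(d)
x /= np.linalg.norm(x)
print(query(x))
```

In this setting, an active learner chooses where to place such queries so that both the subspace spanned by the rows of $\mathbf{A}$ and the ridge components $g_i$ can be estimated with uniform approximation guarantees.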