Minimizing Quadratic Functions in Constant Time

Part of Advances in Neural Information Processing Systems 29 (NIPS 2016)


Authors

Kohei Hayashi, Yuichi Yoshida

Abstract

A sampling-based optimization method for quadratic functions is proposed. Our method approximately solves the following $n$-dimensional quadratic minimization problem in constant time, which is independent of $n$: $z^* = \min_{\mathbf{v} \in \mathbb{R}^n} \langle \mathbf{v}, A\mathbf{v} \rangle + n \langle \mathbf{v}, \mathrm{diag}(\mathbf{d})\mathbf{v} \rangle + n \langle \mathbf{b}, \mathbf{v} \rangle$, where $A \in \mathbb{R}^{n \times n}$ is a matrix and $\mathbf{d}, \mathbf{b} \in \mathbb{R}^n$ are vectors. Our theoretical analysis specifies the number of samples $k(\delta, \epsilon)$ such that the approximated solution $z$ satisfies $|z - z^*| = O(\epsilon n^2)$ with probability $1-\delta$. Numerical experiments confirm the empirical accuracy and runtime of the method.
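As a rough illustration of the sampling idea only (not the paper's exact estimator), the sketch below samples $k$ coordinates uniformly, solves the induced $k$-dimensional subproblem, and rescales its optimal value by $(n/k)^2$ to estimate $z^*$. The function names, the rescaling, and the use of a generic numerical optimizer are assumptions made for illustration; the paper should be consulted for the precise estimator and the sample-size bound $k(\delta, \epsilon)$.

```python
import numpy as np
from scipy.optimize import minimize

def quad_objective(v, A, d, b, m):
    # <v, A v> + m <v, diag(d) v> + m <b, v> in dimension m
    return v @ (A @ v) + m * ((v * d) @ v) + m * (b @ v)

def approx_min(A, d, b, k, seed=0):
    """Hypothetical sketch: estimate z* from a k-dimensional subproblem
    on uniformly sampled indices, rescaled by (n/k)^2. Assumes the
    quadratic form is bounded below so the minimum exists."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    S = rng.choice(n, size=k, replace=False)       # uniform index sample
    A_S, d_S, b_S = A[np.ix_(S, S)], d[S], b[S]    # induced subproblem
    res = minimize(quad_objective, np.zeros(k), args=(A_S, d_S, b_S, k))
    return (n / k) ** 2 * res.fun                  # scale k-dim value up to n-dim

# Example usage: the estimate depends only on the k sampled coordinates,
# so its cost is independent of n once A, d, b are given.
# z_hat = approx_min(A, d, b, k=200)
```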