
Submodular Maximization via Gradient Ascent: The Case of Deep Submodular Functions

Part of: Advances in Neural Information Processing Systems 31 (NIPS 2018) pre-proceedings


Authors

Conference Event Type: Poster

Abstract

We study the problem of maximizing deep submodular functions (DSFs) subject to a matroid constraint. DSFs are an expressive class of submodular functions that include, as strict subfamilies, facility location functions, weighted coverage functions, and sums of concave functions composed with modular functions. We use a strategy similar to the continuous greedy approach, but we show that the multilinear extension of any DSF has a natural and computationally attainable concave relaxation that we can optimize using gradient ascent. Our results show a guarantee of $\max_{0<\delta<1}(1-\epsilon-\delta-e^{-\delta^2\Omega(k)})$ with a running time of $O(n^2/\epsilon^2)$ plus the time for pipage rounding to recover a discrete solution, where $k$ is the rank of the matroid constraint. This bound is often better than the standard $1-1/e$ guarantee of the continuous greedy algorithm, and our algorithm runs much faster. Our bound also holds for fully curved ($c=1$) functions, for which the curvature-based guarantee of $1-c/e$ (with $c$ the curvature of $f$) degenerates to $1-1/e$. We perform computational experiments that support our theoretical results.
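To make the approach concrete, the following is a minimal sketch (not the authors' implementation) of the overall pipeline described in the abstract: form a concave relaxation of a DSF, run projected gradient ascent over the matroid polytope, and round the fractional solution. It assumes a simple one-layer DSF (a weighted sum of square roots of nonnegative modular functions), a cardinality (uniform matroid) constraint in place of a general matroid, synthetic random data, and top-$k$ rounding standing in for pipage rounding; all names and parameters below are illustrative.

```python
# Sketch: gradient ascent on a concave relaxation of a one-layer DSF under a
# cardinality constraint. The DSF, projection routine, and rounding step are
# assumptions for illustration, not the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 50, 10, 5              # ground set size, number of features, matroid rank
W = rng.random((m, n))           # nonnegative modular weights m_j(i)
alpha = rng.random(m)            # nonnegative mixture weights w_j


def concave_extension(x):
    """Concave relaxation of f(S) = sum_j alpha_j * sqrt(m_j(S)), evaluated at x in [0,1]^n."""
    return float(alpha @ np.sqrt(W @ x))


def gradient(x):
    """Gradient of the concave extension (sqrt is smoothed slightly away from 0)."""
    inner = W @ x
    return (alpha / (2.0 * np.sqrt(inner + 1e-12))) @ W


def project(y, k, iters=50):
    """Euclidean projection onto {x : 0 <= x <= 1, sum(x) <= k} by bisection on the
    Lagrange multiplier of the sum constraint."""
    x = np.clip(y, 0.0, 1.0)
    if x.sum() <= k:
        return x
    lo, hi = 0.0, float(y.max())
    for _ in range(iters):
        theta = 0.5 * (lo + hi)
        if np.clip(y - theta, 0.0, 1.0).sum() > k:
            lo = theta
        else:
            hi = theta
    return np.clip(y - hi, 0.0, 1.0)


# Projected gradient ascent on the concave relaxation, starting from a feasible point.
x = np.full(n, k / n)
step = 0.1
for _ in range(200):
    x = project(x + step * gradient(x), k)

# Round the fractional solution; here we simply keep the k largest coordinates
# (the paper uses pipage rounding, which handles general matroid constraints).
S = np.argsort(-x)[:k]
print("fractional value:", concave_extension(x))
print("rounded set:", sorted(S.tolist()),
      "value:", float(alpha @ np.sqrt(W[:, S].sum(axis=1))))
```

For this one-layer family, the relaxation above is concave because each term composes a concave scalar function with a nonnegative linear function of $x$, which is what makes plain gradient ascent (rather than the stochastic continuous greedy machinery) applicable in the sketch.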