An Analysis of Convex Relaxations for MAP Estimation

Part of Advances in Neural Information Processing Systems 20 (NIPS 2007)


Authors

Pawan Mudigonda, Vladimir Kolmogorov, Philip Torr

Abstract

The problem of obtaining the maximum a posteriori (MAP) estimate of a general discrete random field (i.e. a random field defined using a finite and discrete set of labels) is known to be NP-hard. However, due to its central importance in many applications, several approximate algorithms have been proposed in the literature. In this paper, we present an analysis of three such algorithms based on convex relaxations: (i) LP-S: the linear programming (LP) relaxation proposed by Schlesinger [20] for a special case and independently in [4, 12, 23] for the general case; (ii) QP-RL: the quadratic programming (QP) relaxation of Ravikumar and Lafferty [18]; and (iii) SOCP-MS: the second-order cone programming (SOCP) relaxation first proposed by Muramatsu and Suzuki [16] for two-label problems and later extended in [14] to a general label set. We show that the SOCP-MS and QP-RL relaxations are equivalent. Furthermore, we prove that despite the flexibility in the form of the constraints and objective function offered by QP and SOCP, the LP-S relaxation strictly dominates (i.e. provides a better approximation than) QP-RL and SOCP-MS. We generalize these results by defining a large class of SOCP (and equivalent QP) relaxations that is dominated by the LP-S relaxation. Based on these results, we propose some novel SOCP relaxations which strictly dominate the previous approaches.
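For concreteness, the following is a sketch of the standard local-polytope LP relaxation for a pairwise random field, in the spirit of LP-S; the notation (vertices V, edges E, label set L, unary potentials θ_a, pairwise potentials θ_{ab}, marginal variables y) is illustrative and need not match the paper's exact formulation. The integrality constraint y ∈ {0,1} of the exact MAP integer program is relaxed to non-negativity:

\begin{align*}
\min_{y}\;\; & \sum_{a \in V}\sum_{i \in L} \theta_a(i)\, y_a(i)
 \;+\; \sum_{(a,b) \in E}\sum_{i,j \in L} \theta_{ab}(i,j)\, y_{ab}(i,j) \\
\text{s.t.}\;\; & \sum_{i \in L} y_a(i) = 1 \qquad \forall\, a \in V, \\
 & \sum_{j \in L} y_{ab}(i,j) = y_a(i) \qquad \forall\, (a,b) \in E,\ i \in L, \\
 & y_a(i) \ge 0,\;\; y_{ab}(i,j) \ge 0 .
\end{align*}

QP-RL and SOCP-MS instead handle the pairwise term through products of unary variables subject to quadratic or second-order cone constraints; the paper's result is that this added flexibility does not help, since LP-S dominates both.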