Active Learning Ranking from Pairwise Preferences with Almost Optimal Query Complexity

Part of Advances in Neural Information Processing Systems 24 (NIPS 2011)


Authors

Nir Ailon

Abstract

Given a set $V$ of $n$ elements, we wish to linearly order them using pairwise preference labels, which may be non-transitive (due to irrationality or arbitrary noise). The goal is to linearly order the elements while disagreeing with as few pairwise preference labels as possible. Our performance is measured by two parameters: the number of disagreements (loss) and the query complexity (number of pairwise preference labels). Our algorithm adaptively queries at most $O(n\,\mathrm{poly}(\log n, \varepsilon^{-1}))$ preference labels for a regret of $\varepsilon$ times the optimal loss. This is strictly better, and often significantly better, than what non-adaptive sampling could achieve. Our main result helps settle an open problem posed by learning-to-rank (from pairwise information) theoreticians and practitioners: What is a provably correct way to sample preference labels?
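The loss referred to above is the number of pairwise preference labels that the output linear order contradicts. The following is a minimal Python sketch, not taken from the paper, that computes this disagreement count for a candidate order; the function name, variable names, and toy data are illustrative assumptions only.

```python
def disagreement_loss(order, prefer):
    """Count pairs whose preference label contradicts the candidate order.

    order  -- list of elements; earlier position means ranked higher.
    prefer -- dict where prefer[(u, v)] is True iff the label says u is
              preferred to v; labels may be non-transitive (noisy).
    """
    position = {v: i for i, v in enumerate(order)}
    loss = 0
    for (u, v), u_over_v in prefer.items():
        # The order ranks u above v iff u appears earlier in the list.
        ranked_u_over_v = position[u] < position[v]
        if ranked_u_over_v != u_over_v:
            loss += 1
    return loss

# Toy example with a non-transitive (cyclic) set of labels over 3 items:
prefer = {("a", "b"): True, ("b", "c"): True, ("c", "a"): True}
print(disagreement_loss(["a", "b", "c"], prefer))  # -> 1; any order must break the cycle
```

Any linear order must disagree with at least one label in this cyclic example, which is exactly the kind of noise the algorithm is designed to tolerate while keeping the query count near-linear in $n$.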