Attribute-efficient learning of decision lists and linear threshold functions under unconcentrated distributions

Part of Advances in Neural Information Processing Systems 19 (NIPS 2006)


Authors

Philip Long, Rocco Servedio

Abstract

We consider the well-studied problem of learning decision lists from few examples when many irrelevant features are present. We show that smooth boosting algorithms such as MadaBoost can efficiently learn decision lists of length k over n Boolean variables using poly(k, log n) examples, provided that the marginal distribution over the relevant variables is "not too concentrated" in an L_2-norm sense. Using a recent result of Håstad, we extend the analysis to obtain a similar (though quantitatively weaker) result for learning arbitrary linear threshold functions with k nonzero coefficients. Experimental results indicate that the use of a smooth boosting algorithm, which plays a crucial role in our analysis, affects the algorithm's performance in practice.
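The key algorithmic ingredient named in the abstract is a smooth booster such as MadaBoost, which (unlike AdaBoost) caps each example's weight at its initial value so that no round's reweighted distribution becomes too concentrated on a few examples. The following is a minimal Python sketch of that weighting scheme, not the paper's construction: the stump weak learner, function names, and round count are illustrative assumptions chosen to keep the example self-contained.

```python
import numpy as np

def stump_weak_learner(X, y, dist):
    """Illustrative weak learner: the best single-variable stump.
    X: (m, n) array with {0,1} entries; y: (m,) labels in {-1,+1};
    dist: (m,) probability distribution over examples."""
    best = (0, +1, 0.5)  # (feature index, sign, weighted error)
    for j in range(X.shape[1]):
        pred = 2 * X[:, j] - 1  # map {0,1} feature to a {-1,+1} prediction
        for sign in (+1, -1):
            err = dist @ (sign * pred != y).astype(float)
            if err < best[2]:
                best = (j, sign, err)
    j, sign, err = best
    return (lambda Z: sign * (2 * Z[:, j] - 1)), err

def madaboost(X, y, rounds=50):
    """MadaBoost-style smooth boosting: weights are min(1, exp(-margin)),
    i.e., capped at their initial value, so no example is over-weighted."""
    m = X.shape[0]
    margins = np.zeros(m)  # cumulative margin y_i * f_t(x_i)
    hyps, alphas = [], []
    for _ in range(rounds):
        w = np.minimum(1.0, np.exp(-margins))  # the smoothness cap
        dist = w / w.sum()
        h, err = stump_weak_learner(X, y, dist)
        if err >= 0.5:  # weak learner gave no advantage; stop
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        margins += alpha * y * h(X)
        hyps.append(h)
        alphas.append(alpha)
    def predict(Z):
        return np.sign(sum(a * h(Z) for a, h in zip(alphas, hyps)))
    return predict
```

The cap min(1, exp(-margin)) is what makes the booster "smooth": each round's distribution puts at most 1/(sum of weights) probability on any single example, which is the property the paper's analysis exploits.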