Part of Advances in Neural Information Processing Systems 14 (NIPS 2001)
Mário Figueiredo
In this paper we introduce a new sparseness-inducing prior which does not involve any (hyper)parameters that need to be adjusted or estimated. Although other applications are possible, we focus here on supervised learning problems: regression and classification. Experiments with several publicly available benchmark data sets show that the proposed approach yields state-of-the-art performance. In particular, our method outperforms support vector machines and performs competitively with the best alternative techniques, both in terms of error rates and sparseness, although it involves no tuning or adjusting of sparseness-controlling hyper-parameters.