Part of Advances in Neural Information Processing Systems 17 (NIPS 2004)
Charles Micchelli, Massimiliano Pontil
This paper provides a foundation for multi-task learning using reproducing kernel Hilbert spaces of vector-valued functions. In this setting, the kernel is a matrix-valued function. Some explicit examples will be described which go beyond our earlier results in . In particular, we characterize classes of matrix-valued kernels which are linear and are of the dot product or the translation-invariant type. We discuss how these kernels can be used to model relations between the tasks and present linear multi-task learning algorithms. Finally, we present a novel proof of the representer theorem for a minimizer of a regularization functional which is based on the notion of minimal norm interpolation.
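As a concrete illustration of the ideas in the abstract, the sketch below builds a separable matrix-valued kernel K(x, y) = k(x, y) B, where k is a scalar translation-invariant (Gaussian) kernel and B is a positive semidefinite matrix encoding inter-task relations, and fits a vector-valued regularized least-squares predictor in the representer-theorem form f(x) = Σ_j K(x, x_j) c_j. This is a minimal sketch, not the paper's algorithm; the Gaussian choice of k, the particular B, the toy data, and the regularization parameter `lam` are all assumptions made for illustration.

```python
import numpy as np

def scalar_kernel(x, y):
    # Translation-invariant scalar kernel (Gaussian, unit bandwidth; an
    # illustrative choice, not prescribed by the paper).
    return np.exp(-0.5 * np.sum((x - y) ** 2))

def matrix_kernel(x, y, B):
    # Separable matrix-valued kernel: K(x, y) = k(x, y) * B, where the
    # PSD matrix B models similarity between the tasks.
    return scalar_kernel(x, y) * B

# Two tasks assumed to be positively correlated (hypothetical B).
B = np.array([[1.0, 0.5],
              [0.5, 1.0]])

# Toy training inputs and vector-valued outputs (one component per task).
X = np.array([[0.0], [1.0], [2.0]])
Y = np.array([[0.0, 0.1],
              [1.0, 0.9],
              [2.0, 2.1]])

n, T = Y.shape

# Block Gram matrix of size (n*T) x (n*T): block (i, j) is K(x_i, x_j).
G = np.block([[matrix_kernel(X[i], X[j], B) for j in range(n)]
              for i in range(n)])

# Vector-valued kernel ridge regression: solve (G + lam*I) c = y.
lam = 1e-3
c = np.linalg.solve(G + lam * np.eye(n * T), Y.ravel())

def predict(x):
    # Representer-theorem form: f(x) = sum_j K(x, x_j) c_j,
    # returning one output component per task.
    Kx = np.hstack([matrix_kernel(x, X[j], B) for j in range(n)])
    return Kx @ c

print(predict(np.array([1.5])))
```

Because B couples the tasks, training data for one task influences the prediction for the other; setting B to the identity recovers independent single-task learners.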