The Annals of Statistics
Ann. Statist., Volume 38, Number 6 (2010), 3660-3695.
Sparsity in multiple kernel learning
Vladimir Koltchinskii and Ming Yuan
Abstract
The problem of multiple kernel learning based on penalized empirical risk minimization is discussed. The complexity penalty is determined jointly by the empirical L2 norms and the reproducing kernel Hilbert space (RKHS) norms induced by the kernels, with a data-driven choice of regularization parameters. The main focus is on the case where the total number of kernels is large but only a relatively small number of them are needed to represent the target function, so that the problem is sparse. The goal is to establish oracle inequalities for the excess risk of the resulting prediction rule, showing that the method is adaptive both to the unknown design distribution and to the sparsity of the problem.
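The flavor of the penalized criterion described in the abstract can be sketched in a few lines of NumPy. This is a simplified stand-in, not the paper's exact construction: the Gaussian kernels, the way the regularization parameter eps weights the two norms, and all function names are illustrative assumptions. The sketch only evaluates the objective; it does not perform the minimization or the data-driven parameter choice.

```python
import numpy as np

def gaussian_kernel(X, bw):
    # Gram matrix of a Gaussian kernel with bandwidth bw (illustrative choice
    # of kernel family; the paper allows a general dictionary of kernels).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * bw ** 2))

def mkl_penalty(alphas, grams, eps):
    """Sparsity-inducing complexity penalty in the spirit of the abstract:
    for each kernel j, combine the empirical L2 norm of f_j = K_j @ alpha_j
    with its RKHS norm sqrt(alpha_j' K_j alpha_j). The eps / eps**2 weighting
    is a hypothetical simplification of the paper's penalty."""
    total = 0.0
    for a, K in zip(alphas, grams):
        f = K @ a
        emp_l2 = np.sqrt(np.mean(f ** 2))        # ||f_j||_{L2(P_n)}
        rkhs = np.sqrt(max(a @ K @ a, 0.0))      # ||f_j||_{H_j}
        total += eps * emp_l2 + eps ** 2 * rkhs
    return total

def penalized_risk(y, alphas, grams, eps):
    # Empirical squared-error risk of the additive predictor sum_j K_j @ alpha_j,
    # plus the complexity penalty above.
    pred = sum(K @ a for a, K in zip(alphas, grams))
    return np.mean((y - pred) ** 2) + mkl_penalty(alphas, grams, eps)
```

Because each summand of the penalty vanishes only when the corresponding coefficient block is zero, minimizing such a criterion tends to zero out entire kernels, which is what makes the estimator sparse over the kernel dictionary.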
Article information
Source
Ann. Statist., Volume 38, Number 6 (2010), 3660-3695.
Dates
First available in Project Euclid: 30 November 2010
Permanent link to this document
https://projecteuclid.org/euclid.aos/1291126969
Digital Object Identifier
doi:10.1214/10-AOS825
Mathematical Reviews number (MathSciNet)
MR2766864
Zentralblatt MATH identifier
1204.62086
Subjects
Primary: 62G08: Nonparametric regression; 62F12: Asymptotic properties of estimators
Secondary: 62J07: Ridge regression; shrinkage estimators
Keywords
High dimensionality; multiple kernel learning; oracle inequality; reproducing kernel Hilbert spaces; restricted isometry; sparsity
Citation
Koltchinskii, Vladimir; Yuan, Ming. Sparsity in multiple kernel learning. Ann. Statist. 38 (2010), no. 6, 3660--3695. doi:10.1214/10-AOS825. https://projecteuclid.org/euclid.aos/1291126969