Open Access
Learning Rates for l1-Regularized Kernel Classifiers
Hongzhi Tong, Di-Rong Chen, Fenghong Yang
J. Appl. Math. 2013: 1-11 (2013). DOI: 10.1155/2013/496282

Abstract

We consider a family of classification algorithms generated from a regularization kernel scheme with an l1 regularizer and a convex loss function. Our main purpose is to provide an explicit convergence rate for the excess misclassification error of the produced classifiers. The error decomposition comprises approximation error, hypothesis error, and sample error. We apply some novel techniques to estimate the hypothesis error and the sample error. Learning rates are then derived under assumptions on the kernel, the input space, the marginal distribution, and the approximation error.
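To make the scheme concrete, the following is a minimal sketch of an l1-regularized kernel classifier: an empirical convex loss on a kernel expansion plus an l1 penalty on the expansion coefficients, solved by proximal gradient descent (ISTA). The Gaussian kernel, the squared loss, and the solver are illustrative assumptions for this sketch, not the specific setup analyzed in the paper.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fit_l1_kernel_classifier(X, y, lam=0.01, sigma=1.0, n_iter=500):
    """Minimize (1/n) ||K a - y||^2 + lam * ||a||_1 over coefficients a
    by ISTA: a gradient step on the smooth loss, then soft-thresholding.
    Squared loss stands in here for a generic convex loss."""
    K = gaussian_kernel(X, X, sigma)
    n = len(y)
    # Step size 1/L, where L = (2/n) * ||K||_2^2 bounds the gradient's
    # Lipschitz constant (||K||_2 is the spectral norm).
    step = 1.0 / (2.0 * np.linalg.norm(K, 2) ** 2 / n)
    a = np.zeros(n)
    for _ in range(n_iter):
        grad = (2.0 / n) * K.T @ (K @ a - y)
        a = soft_threshold(a - step * grad, step * lam)
    return a

def predict(a, X_train, X_new, sigma=1.0):
    """Classify new points by the sign of the learned kernel expansion."""
    return np.sign(gaussian_kernel(X_new, X_train, sigma) @ a)
```

The l1 penalty drives many coefficients exactly to zero, so the resulting classifier is a sparse kernel expansion; the excess misclassification error studied in the paper measures how far such a classifier is from the Bayes-optimal one.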

Citation


Hongzhi Tong, Di-Rong Chen, Fenghong Yang. "Learning Rates for l1-Regularized Kernel Classifiers." J. Appl. Math. 2013: 1-11 (2013). https://doi.org/10.1155/2013/496282

Information

Published: 2013
First available in Project Euclid: 14 March 2014

zbMATH: 06950708
MathSciNet: MR3130980
Digital Object Identifier: 10.1155/2013/496282

Rights: Copyright © 2013 Hindawi
