Open Access
2007 Generalization error for multi-class margin classification
Xiaotong Shen, Lifeng Wang
Electron. J. Statist. 1: 307-330 (2007). DOI: 10.1214/07-EJS069

Abstract

In this article, we study rates of convergence of the generalization error of multi-class margin classifiers. In particular, we develop an upper bound theory quantifying the generalization error of various large margin classifiers. The theory permits a treatment of general margin losses, convex or nonconvex, in the presence or absence of a dominating class. Three main results are established. First, for any fixed margin loss, there may be a trade-off between the ideal and actual generalization performances with respect to the choice of the class of candidate decision functions, which is governed by the trade-off between the approximation and estimation errors. In fact, different margin losses lead to different ideal or actual performances in specific cases. Second, we demonstrate, in a problem of linear learning, that the convergence rate can be arbitrarily fast in the sample size n depending on the joint distribution of the input/output pair. This goes beyond the anticipated rate O(n^{-1}). Third, we establish rates of convergence of several margin classifiers in feature selection with the number of candidate variables p allowed to greatly exceed the sample size n, but no faster than exp(n).
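To make the objects in the abstract concrete, the following is a minimal numerical sketch (not from the paper) of the standard multi-class margin setup it studies: a vector of decision functions f = (f_1, ..., f_k), the classifier argmax_j f_j, the 0-1 generalization error P(argmax_j f_j(X) ≠ Y), and a convex surrogate margin loss, here the multi-class hinge loss used by support vector machines. The function names and the specific hinge surrogate are illustrative choices, not the paper's notation.

```python
import numpy as np

def multiclass_hinge_risk(F, y):
    """Empirical risk under the multi-class hinge (margin) loss.

    F : (n, k) array, F[i, j] = f_j(x_i), the decision-function values.
    y : (n,) array of labels in {0, ..., k-1}.

    The margin of sample i is f_{y_i}(x_i) - max_{j != y_i} f_j(x_i);
    the hinge surrogate penalizes max(0, 1 - margin).
    """
    n, _ = F.shape
    correct = F[np.arange(n), y]          # f_{y_i}(x_i)
    rest = F.copy()
    rest[np.arange(n), y] = -np.inf       # mask the true class
    runner_up = rest.max(axis=1)          # max over competing classes
    margins = correct - runner_up
    return np.maximum(0.0, 1.0 - margins).mean()

def misclassification_error(F, y):
    """Empirical 0-1 error of the argmax classifier: P(argmax_j f_j != Y)."""
    return float((F.argmax(axis=1) != y).mean())
```

A point the abstract emphasizes is that the hinge risk upper-bounds the 0-1 error only through the margin: a sample can be classified correctly (positive margin) yet still incur surrogate loss if its margin is below 1, which is exactly where the choice of margin loss affects ideal versus actual performance.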

Citation


Xiaotong Shen. Lifeng Wang. "Generalization error for multi-class margin classification." Electron. J. Statist. 1 307 - 330, 2007. https://doi.org/10.1214/07-EJS069

Information

Published: 2007
First available in Project Euclid: 27 August 2007

zbMATH: 1320.62152
MathSciNet: MR2336036
Digital Object Identifier: 10.1214/07-EJS069

Subjects:
Primary: 62H30 , 68T10

Keywords: convex and nonconvex losses, import vector machines, small n and large p, sparse learning, support vector machines, ψ-learning

Rights: Copyright © 2007 The Institute of Mathematical Statistics and the Bernoulli Society
