Open Access
Generalization bounds for averaged classifiers
Yoav Freund, Yishay Mansour, Robert E. Schapire
Ann. Statist. 32(4): 1698-1722 (August 2004). DOI: 10.1214/009053604000000058

Abstract

We study a simple learning algorithm for binary classification. Instead of predicting with the best hypothesis in the hypothesis class, that is, the hypothesis that minimizes the training error, our algorithm predicts with a weighted average of all hypotheses, weighted exponentially with respect to their training error. We show that the prediction of this algorithm is much more stable than the prediction of an algorithm that predicts with the best hypothesis. By allowing the algorithm to abstain from predicting on some examples, we show that the predictions it makes when it does not abstain are very reliable. Finally, we show that the probability that the algorithm abstains is comparable to the generalization error of the best hypothesis in the class.
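To make the averaging scheme concrete, the following is a minimal Python sketch of an exponentially weighted average classifier with abstention, in the spirit of the abstract. It assumes a finite hypothesis class given as a list of prediction functions with outputs in {-1, +1}; the learning-rate parameter eta and the abstention margin are illustrative choices, not the paper's exact constants or bounds.

```python
import numpy as np

def averaged_classifier(hypotheses, X_train, y_train, eta=1.0, abstain_margin=0.1):
    """Exponentially weighted average of hypotheses, with abstention.

    Sketch only: each element of `hypotheses` maps an example to a label
    in {-1, +1}; `eta` and `abstain_margin` are illustrative parameters.
    """
    n = len(y_train)
    # Empirical (training) error of each hypothesis on the sample.
    errors = np.array([np.mean([h(x) != y for x, y in zip(X_train, y_train)])
                       for h in hypotheses])
    # Weight each hypothesis exponentially in its scaled training error,
    # then normalize the weights to sum to one.
    weights = np.exp(-eta * n * errors)
    weights /= weights.sum()

    def predict(x):
        # Weighted average prediction, a value in [-1, +1].
        avg = float(np.dot(weights, [h(x) for h in hypotheses]))
        if abs(avg) < abstain_margin:
            return 0  # abstain: the weighted vote is too close to call
        return 1 if avg > 0 else -1

    return predict

# Toy usage: a class of threshold classifiers on a 1-D sample.
thresholds = np.linspace(-1, 1, 21)
hypotheses = [(lambda t: (lambda x: 1 if x >= t else -1))(t) for t in thresholds]
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=50)
y = np.where(X >= 0.2, 1, -1)
clf = averaged_classifier(hypotheses, X, y, eta=2.0, abstain_margin=0.2)
print([clf(x) for x in [-0.5, 0.19, 0.21, 0.8]])  # examples near 0.2 may abstain (0)
```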

Citation


Yoav Freund, Yishay Mansour, Robert E. Schapire. "Generalization bounds for averaged classifiers." Ann. Statist. 32(4): 1698-1722, August 2004. https://doi.org/10.1214/009053604000000058

Information

Published: August 2004
First available in Project Euclid: 4 August 2004

zbMATH: 1045.62056
MathSciNet: MR2089139
Digital Object Identifier: 10.1214/009053604000000058

Subjects:
Primary: 62C12

Keywords: averaging, Bayesian methods, classification, ensemble methods, generalization bounds

Rights: Copyright © 2004 Institute of Mathematical Statistics
