Electronic Communications in Probability

A note on the richness of convex hulls of VC classes

Gábor Lugosi, Shahar Mendelson, and Vladimir Koltchinskii



We prove the existence of a class $A$ of subsets of $\mathbb{R}^d$ of VC dimension 1 such that the symmetric convex hull $F$ of the class of characteristic functions of sets in $A$ is rich in the following sense: for any absolutely continuous probability measure $\mu$ on $\mathbb{R}^d$, any measurable set $B$, and any $\varepsilon > 0$, there exists a function $f$ in $F$ such that the measure of the symmetric difference of $B$ and the set where $f$ is positive is less than $\varepsilon$. The question was motivated by the investigation of the theoretical properties of certain algorithms in machine learning.
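In symbols, the richness property described in the abstract can be sketched as follows. This is our paraphrase, not a formula from the paper: we assume the standard definition of the symmetric convex hull, and the notation $\lambda^d$ (Lebesgue measure on $\mathbb{R}^d$) and $\triangle$ (symmetric difference) is ours.

```latex
% Symmetric convex hull of the indicators of sets in A (standard definition):
%   F = \{ \sum_i c_i \mathbf{1}_{A_i} : A_i \in A, \ \sum_i |c_i| \le 1 \}.
% Richness: for every absolutely continuous mu, every measurable B, and every
% eps > 0, some f in F recovers B up to a set of measure less than eps.
\forall \mu \ll \lambda^d, \ \forall B \subseteq \mathbb{R}^d \text{ measurable},
\ \forall \varepsilon > 0, \ \exists f \in F :
\quad \mu\bigl( B \,\triangle\, \{ x : f(x) > 0 \} \bigr) < \varepsilon .
```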

Article information

Electron. Commun. Probab., Volume 8 (2003), paper no. 18, 167-169.

Accepted: 17 December 2003
First available in Project Euclid: 18 May 2016


Primary: 62G08 (Nonparametric regression); 68Q32 (Computational learning theory) [see also 68T05]

This work is licensed under a Creative Commons Attribution 3.0 License.


Lugosi, Gábor; Mendelson, Shahar; Koltchinskii, Vladimir. A note on the richness of convex hulls of VC classes. Electron. Commun. Probab. 8 (2003), paper no. 18, 167–169. doi:10.1214/ECP.v8-1097. https://projecteuclid.org/euclid.ecp/1463608902



  • Blanchard, G.; Lugosi, G.; Vayatis, N. On the rate of convergence of regularized boosting classifiers. Journal of Machine Learning Research, 4 (2003), 861–894.
  • Breiman, L. Bagging predictors. Machine Learning, 24 (1996), 123–140.
  • Breiman, L. Bias, variance, and arcing classifiers. Technical Report 460, Department of Statistics, University of California at Berkeley, 1996.
  • Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control Signals Systems 2 (1989), no. 4, 303–314.
  • Freund, Yoav. Boosting a weak learning algorithm by majority. Inform. and Comput. 121 (1995), no. 2, 256–285.
  • Hornik, K.; Stinchcombe, M.; White, H. Multi-layer feedforward networks are universal approximators. Neural Networks, 2 (1989), 359–366.
  • Koltchinskii, V.; Panchenko, D. Empirical margin distributions and bounding the generalization error of combined classifiers. Ann. Statist. 30 (2002), no. 1, 1–50.
  • Lugosi, G.; Vayatis, N. On the Bayes-risk consistency of regularized boosting methods. Annals of Statistics, 2003, to appear.
  • Royden, H. L. Real analysis. Third edition. Macmillan Publishing Company, New York, 1988. xx+444 pp. ISBN: 0-02-404151-3.
  • Schapire, R. E. The strength of weak learnability. Machine Learning, 5 (1990), 197–227.