Electronic Journal of Statistics

Selection of variables and dimension reduction in high-dimensional non-parametric regression

Karine Bertin and Guillaume Lecué



We consider an l1-penalization procedure in the non-parametric Gaussian regression model. In many concrete examples, the dimension d of the input variable X is very large (sometimes depending on the number of observations). Estimation of a β-regular regression function f cannot be faster than the slow rate n^{-2β/(2β+d)}. Fortunately, in some situations, f depends only on a small number of the coordinates of X. In this paper, we construct two procedures. The first selects, with high probability, these coordinates. Then, using this subset selection method, we run a local polynomial estimator (on the set of interesting coordinates) to estimate the regression function at the rate n^{-2β/(2β+d*)}, where d*, the “real” dimension of the problem (the exact number of variables on which f depends), replaces the dimension d of the design. To achieve this result, we use an l1-penalization method in this non-parametric setup.
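The two-stage idea in the abstract can be illustrated with a minimal sketch: first an l1-penalized fit selects the coordinates that matter, then a local polynomial (here local linear) estimator is run on only those coordinates. This is not the paper's exact procedure — the authors use a localized l1 selector, whereas a plain linear Lasso (from scikit-learn) stands in here, and the bandwidth, penalty level, and simulated regression function are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Simulated data: d = 10 input coordinates, but f depends only on the first two,
# so the "real" dimension d* = 2 (hypothetical example function).
n, d = 500, 10
X = rng.uniform(0.0, 1.0, size=(n, d))
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(n)

# Stage 1: l1-penalized fit to select the relevant coordinates.
# (A plain linear Lasso is an illustrative stand-in for the paper's selector.)
lasso = Lasso(alpha=0.01).fit(X, y)
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-3)

# Stage 2: local linear estimator on the selected coordinates only,
# computed by kernel-weighted least squares around the query point x0.
def local_linear_predict(x0, X_sel, y, h=0.15):
    diff = X_sel - x0                                   # centred local design
    w = np.exp(-np.sum(diff ** 2, axis=1) / (2 * h ** 2))  # Gaussian weights
    Z = np.hstack([np.ones((len(y), 1)), diff])         # local linear basis
    W = np.diag(w)
    beta = np.linalg.lstsq(Z.T @ W @ Z, Z.T @ W @ y, rcond=None)[0]
    return beta[0]                                      # intercept = f-hat(x0)

x0 = np.array([0.3, 0.5])
estimate = local_linear_predict(x0, X[:, selected], y)
```

Running the second stage on the d* selected coordinates rather than all d is exactly what lets the estimator attain the faster rate n^{-2β/(2β+d*)} instead of n^{-2β/(2β+d)}.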

Article information

Electron. J. Statist., Volume 2 (2008), 1224-1241.

First available in Project Euclid: 16 December 2008


Mathematics Subject Classification — Primary: 62G08 (nonparametric regression)

Keywords: dimension reduction, high dimension, LASSO


Bertin, Karine; Lecué, Guillaume. Selection of variables and dimension reduction in high-dimensional non-parametric regression. Electron. J. Statist. 2 (2008), 1224--1241. doi:10.1214/08-EJS327. https://projecteuclid.org/euclid.ejs/1229450668



References

  • Audibert, J.-Y. and Tsybakov, A. (2007). Fast learning rates for plug-in classifiers under the margin condition. The Annals of Statistics, 35(2):608–633.
  • Belkin, M. and Niyogi, P. (2003). Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15:1373–1396.
  • Bickel, P., Ritov, Y., and Tsybakov, A. (2008). Simultaneous analysis of Lasso and Dantzig selector. To appear in Annals of Statistics.
  • Bickel, P. J. and Li, B. (2007). Local polynomial regression on unknown manifolds. In Complex Datasets and Inverse Problems: Tomography, Networks and Beyond, volume 54 of IMS Lecture Notes–Monograph Series, pages 177–186.
  • Donoho, D. L. and Grimes, C. (2003). Hessian eigenmaps: locally linear embedding techniques for high-dimensional data. Proc. Natl. Acad. Sci. USA, 100:5591–5596.
  • Korostelev, A. P. and Tsybakov, A. B. (1993). Minimax Theory of Image Reconstruction, volume 82 of Lecture Notes in Statistics. Springer, New York.
  • Lafferty, J. and Wasserman, L. (2008). Rodeo: sparse, greedy nonparametric regression. Ann. Statist., 36(1):28–63.
  • Levina, E. and Bickel, P. J. (2005). Maximum likelihood estimation of intrinsic dimension. In Advances in Neural Information Processing Systems 17.
  • Meinshausen, N. and Yu, B. (2008). Lasso-type recovery of sparse representations for high-dimensional data. To appear in Annals of Statistics.
  • Nemirovski, A. (2000). Topics in Non-parametric Statistics, volume 1738 of Lecture Notes in Mathematics (École d'été de Probabilités de Saint-Flour 1998). Springer, New York.
  • Tsybakov, A. B. (1986). Robust reconstruction of functions by the local-approximation method. Problems of Information Transmission, 22:133–146.
  • Tsybakov, A. B. (2003). Optimal rates of aggregation. In Computational Learning Theory and Kernel Machines (B. Schölkopf and M. Warmuth, eds.), Lecture Notes in Artificial Intelligence, 2777:303–313. Springer, Heidelberg.
  • Zhao, P. and Yu, B. (2006). On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541–2563.