The Annals of Statistics

Adaptive robust variable selection

Jianqing Fan, Yingying Fan, and Emre Barut


Abstract

Heavy-tailed high-dimensional data are commonly encountered in various scientific fields and pose great challenges to modern statistical analysis. A natural procedure to address this problem is to use penalized quantile regression with weighted $L_{1}$-penalty, called weighted robust Lasso (WR-Lasso), in which weights are introduced to ameliorate the bias problem induced by the $L_{1}$-penalty. In the ultra-high dimensional setting, where the dimensionality can grow exponentially with the sample size, we investigate the model selection oracle property and establish the asymptotic normality of the WR-Lasso. We show that only mild conditions on the model error distribution are needed. Our theoretical results also reveal that adaptive choice of the weight vector is essential for the WR-Lasso to enjoy these nice asymptotic properties. To make the WR-Lasso practically feasible, we propose a two-step procedure, called adaptive robust Lasso (AR-Lasso), in which the weight vector in the second step is constructed based on the $L_{1}$-penalized quantile regression estimate from the first step. This two-step procedure is justified theoretically to possess the oracle property and asymptotic normality. Numerical studies demonstrate the favorable finite-sample performance of the AR-Lasso.
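
Up to the paper's exact normalization of $\lambda$, the WR-Lasso described above is the minimizer

$$\widehat{\boldsymbol{\beta}} = \arg\min_{\boldsymbol{\beta}} \sum_{i=1}^{n} \rho_{\tau}(y_{i} - \mathbf{x}_{i}^{T}\boldsymbol{\beta}) + n\lambda \sum_{j=1}^{p} d_{j}|\beta_{j}|,$$

where $\rho_{\tau}(u) = u\{\tau - \mathbf{1}(u < 0)\}$ is the quantile check loss at level $\tau \in (0, 1)$ and $\mathbf{d} = (d_{1}, \ldots, d_{p})^{T}$ is the weight vector; taking $d_{j} \equiv 1$ gives the unweighted $L_{1}$-penalized quantile regression used in the first step of the AR-Lasso.

Because this objective is a linear program, the two-step procedure is straightforward to prototype. The Python sketch below is illustrative rather than the authors' implementation: `wr_lasso` solves the LP with SciPy, and `ar_lasso` forms the second-step weights from the derivative of the SCAD penalty evaluated at the first-step estimate. The function names, the choice of SCAD with constant $a = 3.7$, and the normalization of the weights are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import linprog

def wr_lasso(X, y, tau=0.5, lam=0.1, d=None):
    """Weighted L1-penalized quantile regression (WR-Lasso) as a linear program.

    Minimizes sum_i rho_tau(y_i - x_i'beta) + n*lam*sum_j d_j*|beta_j| after
    splitting residuals and coefficients into nonnegative parts.
    """
    n, p = X.shape
    d = np.ones(p) if d is None else np.asarray(d, dtype=float)
    # Variables: [u (n), v (n), b_plus (p), b_minus (p)], all >= 0, with
    # y - X @ beta = u - v and beta = b_plus - b_minus.
    c = np.concatenate([tau * np.ones(n), (1.0 - tau) * np.ones(n),
                        n * lam * d, n * lam * d])
    A_eq = np.hstack([np.eye(n), -np.eye(n), X, -X])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    z = res.x
    return z[2 * n:2 * n + p] - z[2 * n + p:]

def scad_weights(beta, lam, a=3.7):
    """Adaptive weights d_j = p'_lam(|beta_j|) / lam for the SCAD penalty
    (an assumed but common choice of concave penalty). Coefficients estimated
    near zero keep weight 1 (Lasso-like); large ones get weight 0 (no bias).
    """
    t = np.abs(beta)
    return (t <= lam) + np.maximum(a * lam - t, 0.0) / ((a - 1.0) * lam) * (t > lam)

def ar_lasso(X, y, tau=0.5, lam=0.1):
    """Two-step adaptive robust Lasso (AR-Lasso) sketch."""
    beta_init = wr_lasso(X, y, tau=tau, lam=lam)  # step 1: uniform weights
    return wr_lasso(X, y, tau=tau, lam=lam, d=scad_weights(beta_init, lam))

# Toy usage (illustrative): a 3-sparse signal with heavy-tailed t_2 errors.
rng = np.random.default_rng(0)
n_obs, n_dim = 100, 20
X = rng.standard_normal((n_obs, n_dim))
beta_true = np.zeros(n_dim)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.standard_t(df=2, size=n_obs)
beta_hat = ar_lasso(X, y, tau=0.5, lam=0.1)
```

In practice $\lambda$ would be tuned, for example by cross-validation, and a sparse LP formulation would replace the dense matrices for large $n$ and $p$.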

Article information

Source
Ann. Statist., Volume 42, Number 1 (2014), 324–351.

Dates
First available in Project Euclid: 19 March 2014

Permanent link to this document
https://projecteuclid.org/euclid.aos/1395234980

Digital Object Identifier
doi:10.1214/13-AOS1191

Mathematical Reviews number (MathSciNet)
MR3189488

Zentralblatt MATH identifier
1296.62144

Subjects
Primary: 62J07: Ridge regression; shrinkage estimators
Secondary: 62H12: Estimation

Keywords
Adaptive weighted $L_{1}$; high dimensions; oracle properties; robust regularization

Citation

Fan, Jianqing; Fan, Yingying; Barut, Emre. Adaptive robust variable selection. Ann. Statist. 42 (2014), no. 1, 324–351. doi:10.1214/13-AOS1191. https://projecteuclid.org/euclid.aos/1395234980


References

  • Belloni, A. and Chernozhukov, V. (2011). $\ell_1$-penalized quantile regression in high-dimensional sparse models. Ann. Statist. 39 82–130.
  • Bickel, P. J. and Li, B. (2006). Regularization in statistics. TEST 15 271–344. With comments and a rejoinder by the authors.
  • Bickel, P. J., Ritov, Y. and Tsybakov, A. B. (2009). Simultaneous analysis of Lasso and Dantzig selector. Ann. Statist. 37 1705–1732.
  • Bradic, J., Fan, J. and Wang, W. (2011). Penalized composite quasi-likelihood for ultrahigh dimensional variable selection. J. R. Stat. Soc. Ser. B Stat. Methodol. 73 325–349.
  • Bühlmann, P. and van de Geer, S. (2011). Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer, Heidelberg.
  • Candès, E. and Tao, T. (2007). The Dantzig selector: Statistical estimation when $p$ is much larger than $n$. Ann. Statist. 35 2313–2351.
  • Fan, J., Fan, Y. and Barut, E. (2014). Supplement to “Adaptive robust variable selection.” DOI:10.1214/13-AOS1191SUPP.
  • Fan, J., Fan, Y. and Lv, J. (2008). High dimensional covariance matrix estimation using a factor model. J. Econometrics 147 186–197.
  • Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc. 96 1348–1360.
  • Fan, J. and Lv, J. (2008). Sure independence screening for ultrahigh dimensional feature space. J. R. Stat. Soc. Ser. B Stat. Methodol. 70 849–911.
  • Fan, J. and Lv, J. (2011). Nonconcave penalized likelihood with NP-dimensionality. IEEE Trans. Inform. Theory 57 5467–5484.
  • Fan, J. and Peng, H. (2004). Nonconcave penalized likelihood with a diverging number of parameters. Ann. Statist. 32 928–961.
  • Li, Y. and Zhu, J. (2008). $L_1$-norm quantile regression. J. Comput. Graph. Statist. 17 163–185.
  • Lv, J. and Fan, Y. (2009). A unified approach to model selection and sparse recovery using regularized least squares. Ann. Statist. 37 3498–3528.
  • Meinshausen, N. and Bühlmann, P. (2010). Stability selection. J. R. Stat. Soc. Ser. B Stat. Methodol. 72 417–473.
  • Newey, W. K. and Powell, J. L. (1990). Efficient estimation of linear and type I censored regression models under conditional quantile restrictions. Econometric Theory 6 295–317.
  • Nolan, J. P. (2012). Stable Distributions—Models for Heavy-Tailed Data. Birkhäuser, Cambridge. (In progress, Chapter 1 online at academic2.american.edu/~jpnolan).
  • Pollard, D. (1991). Asymptotics for least absolute deviation regression estimators. Econometric Theory 7 186–199.
  • Tibshirani, R. (1996). Regression shrinkage and selection via the Lasso. J. R. Stat. Soc. Ser. B Stat. Methodol. 58 267–288.
  • van de Geer, S. and Müller, P. (2012). Quasi-likelihood and/or robust estimation in high dimensions. Statist. Sci. 27 469–480.
  • Wang, L. (2013). $L_1$ penalized LAD estimator for high dimensional linear regression. J. Multivariate Anal. 120 135–151.
  • Wang, H., Li, G. and Jiang, G. (2007). Robust regression shrinkage and consistent variable selection through the LAD-Lasso. J. Bus. Econom. Statist. 25 347–355.
  • Wang, L., Wu, Y. and Li, R. (2012). Quantile regression for analyzing heterogeneity in ultra-high dimension. J. Amer. Statist. Assoc. 107 214–222.
  • Wu, Y. and Liu, Y. (2009). Variable selection in quantile regression. Statist. Sinica 19 801–817.
  • Zhang, C.-H. (2010). Nearly unbiased variable selection under minimax concave penalty. Ann. Statist. 38 894–942.
  • Zhao, P. and Yu, B. (2006). On model selection consistency of Lasso. J. Mach. Learn. Res. 7 2541–2563.
  • Zou, H. (2006). The adaptive Lasso and its oracle properties. J. Amer. Statist. Assoc. 101 1418–1429.
  • Zou, H. and Li, R. (2008). One-step sparse estimates in nonconcave penalized likelihood models. Ann. Statist. 36 1509–1533.
  • Zou, H. and Yuan, M. (2008). Composite quantile regression and the oracle model selection theory. Ann. Statist. 36 1108–1126.

Supplemental materials

  • Supplementary material for “Adaptive robust variable selection.” Due to space constraints, the proofs of Theorems 3 and 5 and the results of the real-data study are relegated to the supplement [Fan, Fan and Barut (2014)].