Electronic Journal of Statistics

On the asymptotic variance of the debiased Lasso

Sara van de Geer



We consider the high-dimensional linear regression model $Y=X\beta^{0}+\epsilon$ with Gaussian noise $\epsilon$ and Gaussian random design $X$. We assume that $\Sigma:=\mathbb{E}X^{T}X/n$ is non-singular and write its inverse as $\Theta:=\Sigma^{-1}$. The parameter of interest is the first component $\beta_{1}^{0}$ of $\beta^{0}$. We show that in the high-dimensional case the asymptotic variance of a debiased Lasso estimator can be smaller than $\Theta_{1,1}$. For some special cases of this kind we establish asymptotic efficiency. The conditions include $\beta^{0}$ being sparse and the first column $\Theta_{1}$ of $\Theta$ not being sparse. These sparsity conditions depend on whether $\Sigma$ is known or not.
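As a rough numerical illustration of the estimator under discussion, the following sketch computes the debiased Lasso for the first coordinate in the known-$\Sigma$ case, with $\Sigma=I$ so that $\Theta_{1}=e_{1}$. The ISTA solver, the simulation sizes, and the tuning parameter $\lambda=\sqrt{2\log p/n}$ are illustrative choices for this sketch, not the paper's construction.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=1000):
    """Lasso via proximal gradient (ISTA):
    minimize (1/(2n))||y - X b||_2^2 + lam * ||b||_1."""
    n, p = X.shape
    # Step size 1/L, where L = ||X||_2^2 / n is the gradient's Lipschitz constant.
    step = n / np.linalg.norm(X, 2) ** 2
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        z = b - step * grad
        # Soft-thresholding (the proximal map of the l1 penalty).
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return b

def debias_first(X, y, b_lasso, theta1):
    """One-step bias correction of the first Lasso coordinate:
    b1_hat + theta1^T X^T (y - X b_lasso) / n,
    with theta1 the (here: known) first column of Sigma^{-1}."""
    n = X.shape[0]
    return b_lasso[0] + theta1 @ (X.T @ (y - X @ b_lasso)) / n

# Simulation: sparse beta^0, Gaussian design with Sigma = I, so theta1 = e_1.
rng = np.random.default_rng(0)
n, p = 200, 300
beta0 = np.zeros(p)
beta0[:3] = [2.0, 1.0, -1.0]
X = rng.standard_normal((n, p))
y = X @ beta0 + rng.standard_normal(n)

lam = np.sqrt(2 * np.log(p) / n)          # illustrative tuning parameter
b_lasso = lasso_ista(X, y, lam)
theta1 = np.zeros(p)
theta1[0] = 1.0
b1_debiased = debias_first(X, y, b_lasso, theta1)
```

The debiasing step adds back the component of the residual correlated with the first (decorrelated) direction, which removes the shrinkage bias of the plain Lasso coordinate at the cost of the estimator no longer being sparse.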

Article information

Electron. J. Statist., Volume 13, Number 2 (2019), 2970-3008.

Received: August 2018
First available in Project Euclid: 18 September 2019


Primary: 62J07: Ridge regression; shrinkage estimators
Secondary: 62E20: Asymptotic distribution theory

Keywords: asymptotic efficiency; asymptotic variance; Cramér–Rao lower bound; debiasing; Lasso; sparsity

This article is licensed under a Creative Commons Attribution 4.0 International License.


van de Geer, Sara. On the asymptotic variance of the debiased Lasso. Electron. J. Statist. 13 (2019), no. 2, 2970--3008. doi:10.1214/19-EJS1599. https://projecteuclid.org/euclid.ejs/1568794145


