The Annals of Statistics

On consistency and sparsity for sliced inverse regression in high dimensions

Qian Lin, Zhigen Zhao, and Jun S. Liu

Abstract

We provide here a framework to analyze the phase transition phenomenon of sliced inverse regression (SIR), a supervised dimension reduction technique introduced by Li [J. Amer. Statist. Assoc. 86 (1991) 316–342]. Under mild conditions, the asymptotic ratio $\rho=\lim p/n$ is the phase transition parameter and the SIR estimator is consistent if and only if $\rho=0$. When the dimension $p$ is greater than the sample size $n$, we propose a diagonal thresholding screening SIR (DT-SIR) algorithm. This method provides us with an estimate of the eigenspace of $\operatorname{var}(\mathbb{E}[\boldsymbol{x}|y])$, the covariance matrix of the conditional expectation. The desired dimension reduction space is then obtained by multiplying this estimated eigenspace by the inverse of the covariance matrix of the predictors. Under certain sparsity assumptions on both the covariance matrix of the predictors and the loadings of the directions, we prove the consistency of DT-SIR in estimating the dimension reduction space in high-dimensional data analysis. Extensive numerical experiments demonstrate the superior performance of the proposed method in comparison with its competitors.
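The abstract describes DT-SIR as a three-step recipe: screen coordinates by the diagonal of an estimate of $\operatorname{var}(\mathbb{E}[\boldsymbol{x}|y])$, run SIR on the retained coordinates to estimate the eigenspace, then map that eigenspace through the inverse covariance of the retained predictors. The sketch below is a minimal illustration of that recipe in Python, assuming a single-index model and equal-sized slices; all function names, the slice count H, and the screening threshold are illustrative choices of ours, not the authors' tuned algorithm.

```python
# A minimal, illustrative sketch of the DT-SIR recipe from the abstract.
# Not the authors' reference implementation; names and tuning are ad hoc.
import numpy as np

def slice_means(x, y, H):
    """Sort observations by y, split into H equal slices, return per-slice means of x."""
    order = np.argsort(y)
    return np.array([x[idx].mean(axis=0) for idx in np.array_split(order, H)])

def dt_sir(x, y, H=10, d=1, threshold=None):
    """Diagonal-thresholding screening followed by SIR on the retained coordinates."""
    n, p = x.shape
    m = slice_means(x, y, H)                      # H x p matrix of slice means
    lam_diag = m.var(axis=0)                      # crude estimate of var(E[x_j | y])
    if threshold is None:
        threshold = np.sqrt(np.log(p) / n)        # ad hoc rate-style choice, illustration only
    keep = np.flatnonzero(lam_diag > threshold)   # diagonal-thresholding screening step
    m_k = m[:, keep] - m[:, keep].mean(axis=0)
    lam_hat = m_k.T @ m_k / H                     # estimate of var(E[x | y]) on kept coords
    eigvals, eigvecs = np.linalg.eigh(lam_hat)
    eta = eigvecs[:, -d:]                         # top-d eigenspace of the estimate
    sigma_hat = np.cov(x[:, keep], rowvar=False)  # covariance of retained predictors
    beta_keep = np.linalg.solve(sigma_hat, eta)   # multiply eigenspace by Sigma^{-1}
    beta = np.zeros((p, d))
    beta[keep] = beta_keep
    return beta / np.linalg.norm(beta, axis=0)

# Toy check: sparse single-index model with a monotone nonlinear link.
rng = np.random.default_rng(0)
n, p = 500, 1000
beta_true = np.zeros(p)
beta_true[:5] = 1 / np.sqrt(5)
x = rng.standard_normal((n, p))
y = np.exp(x @ beta_true) + 0.1 * rng.standard_normal(n)
beta_hat = dt_sir(x, y)
print(np.abs(beta_hat[:, 0] @ beta_true))        # near 1 when the direction is recovered
```

Screening before the final step is what makes the $p > n$ regime tractable here: the covariance matrix is estimated and inverted only on the small set of retained coordinates, rather than on all $p$ predictors.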

Article information

Source
Ann. Statist., Volume 46, Number 2 (2018), 580–610.

Dates
Received: July 2015
Revised: January 2017
First available in Project Euclid: 3 April 2018

Permanent link to this document
https://projecteuclid.org/euclid.aos/1522742430

Digital Object Identifier
doi:10.1214/17-AOS1561

Mathematical Reviews number (MathSciNet)
MR3782378

Zentralblatt MATH identifier
06870273

Subjects
Primary: 62J02: General nonlinear regression
Secondary: 62H25: Factor analysis and principal components; correspondence analysis

Keywords
Dimension reduction; random matrix theory; sliced inverse regression

Citation

Lin, Qian; Zhao, Zhigen; Liu, Jun S. On consistency and sparsity for sliced inverse regression in high dimensions. Ann. Statist. 46 (2018), no. 2, 580–610. doi:10.1214/17-AOS1561. https://projecteuclid.org/euclid.aos/1522742430


References

  • Bickel, P. J. and Levina, E. (2008). Covariance regularization by thresholding. Ann. Statist. 36 2577–2604.
  • Cai, T. T., Zhang, C.-H. and Zhou, H. H. (2010). Optimal rates of convergence for covariance matrix estimation. Ann. Statist. 38 2118–2144.
  • Candes, E. and Tao, T. (2007). The Dantzig selector: Statistical estimation when $p$ is much larger than $n$. Ann. Statist. 35 2313–2351.
  • Cook, R. D. (1996). Graphics for regressions with a binary response. J. Amer. Statist. Assoc. 91 983–992.
  • Cook, R. D., Forzani, L. and Rothman, A. J. (2012). Estimating sufficient reductions of the predictors in abundant high-dimensional regressions. Ann. Statist. 40 353–384.
  • Cui, H., Li, R. and Zhong, W. (2015). Model-free feature screening for ultrahigh dimensional discriminant analysis. J. Amer. Statist. Assoc. 110 630–641.
  • Fan, J. and Lv, J. (2008). Sure independence screening for ultrahigh dimensional feature space. J. R. Stat. Soc. Ser. B. Stat. Methodol. 70 849–911.
  • Hsing, T. and Carroll, R. J. (1992). An asymptotic theory for sliced inverse regression. Ann. Statist. 20 1040–1061.
  • Jiang, B. and Liu, J. S. (2014). Variable selection for general index models via sliced inverse regression. Ann. Statist. 42 1751–1786.
  • Johnstone, I. M. and Lu, A. Y. (2009). On consistency and sparsity for principal components analysis in high dimensions. J. Amer. Statist. Assoc. 104 682–693.
  • Li, K.-C. (1991). Sliced inverse regression for dimension reduction. J. Amer. Statist. Assoc. 86 316–342.
  • Li, L. (2007). Sparse sufficient dimension reduction. Biometrika 94 603–613.
  • Li, L. and Nachtsheim, C. J. (2006). Sparse sliced inverse regression. Technometrics 48 503–510.
  • Lin, Q., Zhao, Z. and Liu, J. S. (2018). Supplement to “On consistency and sparsity for sliced inverse regression in high dimensions.” DOI:10.1214/17-AOS1561SUPP.
  • Luo, X., Stefanski, L. A. and Boos, D. D. (2006). Tuning variable selection procedures by adding noise. Technometrics 48 165–175.
  • Neykov, M., Lin, Q. and Liu, J. S. (2015). Signed support recovery for single index models in high-dimensions. Ann. Math. Sci. Appl. 1 379–426. DOI:10.4310/AMSA.2016.v1.n2.a5.
  • Székely, G. J., Rizzo, M. L. and Bakirov, N. K. (2007). Measuring and testing dependence by correlation of distances. Ann. Statist. 35 2769–2794.
  • Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B. Stat. Methodol. 58 267–288.
  • Vershynin, R. (2012). Introduction to the non-asymptotic analysis of random matrices. In Compressed Sensing 210–268. Cambridge Univ. Press, Cambridge.
  • Wu, Y., Boos, D. D. and Stefanski, L. A. (2007). Controlling variable selection by the addition of pseudovariables. J. Amer. Statist. Assoc. 102 235–243.
  • Yu, Z., Dong, Y. and Zhu, L.-X. (2016). Trace pursuit: A general framework for model-free variable selection. J. Amer. Statist. Assoc. 111 813–821.
  • Yu, Z., Zhu, L., Peng, H. and Zhu, L. (2013). Dimension reduction and predictor selection in semiparametric models. Biometrika 100 641–654.
  • Zhong, W., Zhang, T., Zhu, Y. and Liu, J. S. (2012). Correlation pursuit: Forward stepwise variable selection for index models. J. R. Stat. Soc. Ser. B. Stat. Methodol. 74 849–870.
  • Zhu, L.-X. and Fang, K.-T. (1996). Asymptotics for kernel estimate of sliced inverse regression. Ann. Statist. 24 1053–1068.
  • Zhu, L., Miao, B. and Peng, H. (2006). On sliced inverse regression with high-dimensional covariates. J. Amer. Statist. Assoc. 101 630–643.
  • Zhu, L. X. and Ng, K. W. (1995). Asymptotics of sliced inverse regression. Statist. Sinica 5 727–736.
  • Zhu, L.-P., Li, L., Li, R. and Zhu, L.-X. (2011). Model-free feature screening for ultrahigh-dimensional data. J. Amer. Statist. Assoc. 106 1464–1475.
  • Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B. Stat. Methodol. 67 301–320.

Supplemental materials

  • Supplement to “On consistency and sparsity for sliced inverse regression in high dimensions”. In the supplement, we prove the rest of the results stated in the paper.