Electronic Journal of Statistics

Low rank multivariate regression

Christophe Giraud



We consider in this paper the multivariate regression problem, when the target regression matrix A is close to a low rank matrix. Our primary interest is in the practical case where the variance of the noise is unknown. Our main contribution is to propose in this setting a criterion for selecting among a family of low rank estimators, and to prove a non-asymptotic oracle inequality for the resulting estimator. We also investigate the easier case where the variance of the noise is known, and show that the penalties appearing in our criteria are minimal (in some sense). These penalties involve the expected value of Ky-Fan norms of some random matrices. These quantities can be evaluated easily in practice, and upper bounds can be derived from recent results in random matrix theory.
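The expected Ky-Fan norms entering the penalties can indeed be evaluated by straightforward Monte Carlo simulation. The sketch below (an illustration, not code from the paper; the function name and parameters are ours) estimates the expected Ky-Fan (k)-norm, i.e. the expected sum of the k largest singular values, of a matrix with i.i.d. standard Gaussian entries:

```python
import numpy as np

def expected_ky_fan_norm(n, m, k, n_samples=200, seed=0):
    """Monte Carlo estimate of E[ sum of the k largest singular values ]
    of an n x m matrix with i.i.d. N(0, 1) entries."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        # Singular values are returned in decreasing order.
        s = np.linalg.svd(rng.standard_normal((n, m)), compute_uv=False)
        total += s[:k].sum()
    return total / n_samples

# Example: expected largest singular value (Ky-Fan 1-norm) of a
# 50 x 50 Gaussian matrix; by the Davidson-Szarek bound it is at
# most sqrt(n) + sqrt(m) ~ 14.14.
est = expected_ky_fan_norm(50, 50, 1)
```

For an upper bound without simulation, one can use the non-asymptotic bound E[s_max] ≤ √n + √m for Gaussian matrices, in the spirit of the random matrix results the abstract refers to.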

Article information

Electron. J. Statist., Volume 5 (2011), 775-799.

First available in Project Euclid: 8 August 2011


Primary: 62H99 (None of the above, but in this section). Secondary: 60B20 (Random matrices, probabilistic aspects); 62J05 (Linear regression).

Keywords: multivariate regression; random matrix; Ky-Fan norms; estimator selection


Giraud, Christophe. Low rank multivariate regression. Electron. J. Statist. 5 (2011), 775--799. doi:10.1214/11-EJS625. https://projecteuclid.org/euclid.ejs/1312818918


