Open Access
October 2009 Near-ideal model selection by ℓ1 minimization
Emmanuel J. Candès, Yaniv Plan
Ann. Statist. 37(5A): 2145-2177 (October 2009). DOI: 10.1214/08-AOS653

Abstract

We consider the fundamental problem of estimating the mean of a vector y = Xβ + z, where X is an n×p design matrix in which one can have far more variables than observations, and z is a stochastic error term—the so-called “p > n” setup. When β is sparse, or, more generally, when there is a sparse subset of covariates providing a close approximation to the unknown mean vector, we ask whether or not it is possible to accurately estimate Xβ using a computationally tractable algorithm.

We show that, in a surprisingly wide range of situations, the lasso happens to nearly select the best subset of variables. Quantitatively speaking, we prove that solving a simple quadratic program achieves a squared error within a logarithmic factor of the ideal mean squared error that one would achieve with an oracle supplying perfect information about which variables should and should not be included in the model. Interestingly, our results describe the average performance of the lasso; that is, the performance one can expect in a vast majority of cases where Xβ is a sparse or nearly sparse superposition of variables, but not in all cases.

Our results are nonasymptotic and widely applicable, since they simply require that pairs of predictor variables are not too collinear.
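As a rough numerical illustration of the setup described in the abstract—not the paper's exact conditions, penalty choice, or constants—the sketch below assumes a Gaussian design with p > n, a sparse β, and compares the lasso's squared prediction error to that of an oracle least-squares fit on the true support. The design normalization, noise level, penalty scaling, and use of scikit-learn are all illustrative assumptions.

    # Illustrative sketch of the p > n sparse-regression setup (assumed parameters).
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p, s, sigma = 100, 500, 5, 1.0              # far more variables than observations

    X = rng.standard_normal((n, p)) / np.sqrt(n)   # random design, roughly unit-norm columns
    beta = np.zeros(p)
    beta[:s] = 5.0 * rng.standard_normal(s)        # sparse coefficient vector
    z = sigma * rng.standard_normal(n)             # stochastic error
    y = X @ beta + z                               # y = X beta + z

    # The lasso: a quadratic program minimizing 0.5*||y - Xb||^2 + lam*||b||_1.
    # The penalty scaling below is an assumed, commonly used choice, not the paper's.
    lam = sigma * np.sqrt(2 * np.log(p)) / n
    fit = Lasso(alpha=lam, fit_intercept=False).fit(X, y)
    lasso_err = np.sum((X @ fit.coef_ - X @ beta) ** 2)

    # "Oracle" least squares restricted to the true support, standing in for the
    # ideal risk achievable with perfect information about the correct model.
    S = np.flatnonzero(beta)
    beta_oracle = np.zeros(p)
    beta_oracle[S] = np.linalg.lstsq(X[:, S], y, rcond=None)[0]
    oracle_err = np.sum((X @ beta_oracle - X @ beta) ** 2)

    print(f"lasso squared error : {lasso_err:.3f}")
    print(f"oracle squared error: {oracle_err:.3f}")
    print(f"ratio (log p = {np.log(p):.2f}): {lasso_err / oracle_err:.2f}")

In runs of this kind one typically observes the lasso's error exceeding the oracle error by a modest factor, in the spirit of the logarithmic-factor bound stated above, though this script is only a toy check, not a verification of the theorem's conditions.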

Citation


Emmanuel J. Candès, Yaniv Plan. "Near-ideal model selection by ℓ1 minimization." Ann. Statist. 37 (5A) 2145 - 2177, October 2009. https://doi.org/10.1214/08-AOS653

Information

Published: October 2009
First available in Project Euclid: 15 July 2009

zbMATH: 1173.62053
MathSciNet: MR2543688
Digital Object Identifier: 10.1214/08-AOS653

Subjects:
Primary: 62C05, 62G05
Secondary: 94A08, 94A12

Keywords: compressed sensing, eigenvalues of random matrices, incoherence, model selection, oracle inequalities, the lasso

Rights: Copyright © 2009 Institute of Mathematical Statistics

Vol. 37 • No. 5A • October 2009