Bernoulli, Volume 10, Number 6 (2004), 971-988.
Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
Let $Z^i = (Y^i, X_1^i, \dots, X_m^i)$, $i = 1, \dots, n$, be independent and identically distributed random vectors, $Z^i \sim F$, $F \in \mathcal{F}$. It is desired to predict $Y$ by $\sum_j \beta_j X_j$, where $(\beta_1, \dots, \beta_m) \in B_n \subseteq \mathbb{R}^m$, under a prediction loss. Suppose that $m = n^\alpha$, $\alpha > 1$, that is, there are many more explanatory variables than observations. We consider sets $B_n$ restricted by the maximal number of non-zero coefficients of their members, or by their $l_1$ radius. We study the following asymptotic question: how 'large' may the set $B_n$ be, so that it is still possible to select empirically a predictor whose risk under $F$ is close to that of the best predictor in the set? Sharp bounds for orders of magnitude are given under various assumptions on $\mathcal{F}$. The algorithmic complexity of the ensuing procedures is also studied. The main message of this paper, and the implication of the orders derived, is that under various sparsity assumptions on the optimal predictor there is 'asymptotically no harm' in introducing many more explanatory variables than observations. Furthermore, such practice can be beneficial in comparison with a procedure that screens in advance a small subset of explanatory variables. Another main result is that 'lasso' procedures, that is, optimization under $l_1$ constraints, could be efficient in finding optimal sparse predictors in high dimensions.
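The lasso setting the abstract refers to, with many more variables than observations and a sparse optimal predictor, can be illustrated with a small simulation. The sketch below is not from the paper: it implements $l_1$-penalized least squares by plain cyclic coordinate descent (soft-thresholding updates) in NumPy, with illustrative choices of $n$, $m$, the sparse coefficient vector, and the penalty level.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent:
    minimize (1/2n)||y - X beta||^2 + lam * ||beta||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n   # per-coordinate curvature X_j'X_j / n
    resid = y.copy()                    # residual y - X beta (beta = 0 initially)
    for _ in range(n_iter):
        for j in range(p):
            if col_sq[j] == 0.0:
                continue
            # correlation of X_j with the partial residual (beta_j removed)
            rho = X[:, j] @ resid / n + col_sq[j] * beta[j]
            # soft-thresholding update for coordinate j
            new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            resid += X[:, j] * (beta[j] - new)
            beta[j] = new
    return beta

# Illustrative p >> n design: n = 100 observations, m = 500 variables,
# only 3 non-zero coefficients in the optimal linear predictor.
rng = np.random.default_rng(0)
n, p, s = 100, 500, 3
beta_true = np.zeros(p)
beta_true[:s] = [3.0, -2.0, 1.5]
X = rng.standard_normal((n, p))
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta_hat = lasso_cd(X, y, lam=0.1)
support = np.flatnonzero(np.abs(beta_hat) > 0.05)
```

Despite $m = 5n$, the $l_1$ constraint yields a sparse estimate whose support concentrates on the few truly active variables, in the spirit of the paper's message that overparametrization under sparsity is asymptotically harmless.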
First available in Project Euclid: 21 January 2005
Greenshtein, Eitan; Ritov, Ya'acov. Persistence in high-dimensional linear predictor selection and the virtue of overparametrization. Bernoulli 10 (2004), no. 6, 971-988. doi:10.3150/bj/1106314846. https://projecteuclid.org/euclid.bj/1106314846