Open Access
Non-negative least squares for high-dimensional linear models: Consistency and sparse recovery without regularization
Martin Slawski, Matthias Hein
Electron. J. Statist. 7: 3004-3056 (2013). DOI: 10.1214/13-EJS868

Abstract

Least squares fitting is in general not useful for high-dimensional linear models, in which the number of predictors is of the same or even larger order of magnitude than the number of samples. Theory developed in recent years has established a paradigm according to which sparsity-promoting regularization is regarded as a necessity in such a setting. Deviating from this paradigm, we show that non-negativity constraints on the regression coefficients may be as effective as explicit regularization if the design matrix has additional properties, which are met in several applications of non-negative least squares (NNLS). We show that for these designs, the performance of NNLS with regard to prediction and estimation is comparable to that of the lasso. We argue further that in specific cases, NNLS may have a better $\ell_{\infty}$-rate in estimation and hence also advantages with respect to support recovery when combined with thresholding. From a practical point of view, NNLS does not depend on a regularization parameter and is hence easier to use.
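To make the procedure concrete, the following is a minimal sketch (not the authors' implementation) of non-negative least squares followed by hard thresholding for support recovery, using scipy.optimize.nnls. The entrywise non-negative random design, the noise level, and the threshold value are illustrative assumptions, not choices taken from the paper.

# Minimal NNLS + thresholding sketch; design, noise, and threshold are illustrative assumptions.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n, p, s = 100, 200, 5                    # samples, predictors, sparsity level (assumed)

X = rng.uniform(0.0, 1.0, size=(n, p))   # entrywise non-negative random design (assumption)
beta = np.zeros(p)
beta[:s] = 1.0                           # true non-negative coefficients on the first s predictors
y = X @ beta + 0.1 * rng.standard_normal(n)

beta_hat, _ = nnls(X, y)                 # solves min_{b >= 0} ||y - X b||_2, no regularization parameter

tau = 0.5                                # ad hoc threshold level (assumption)
support_hat = np.flatnonzero(beta_hat > tau)
print("recovered support:", support_hat)

In this sketch the only tuning quantity is the threshold tau applied after the fit; the NNLS step itself is parameter-free, which reflects the practical point made in the abstract.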

Citation


Martin Slawski, Matthias Hein. "Non-negative least squares for high-dimensional linear models: Consistency and sparse recovery without regularization." Electron. J. Statist. 7: 3004-3056, 2013. https://doi.org/10.1214/13-EJS868

Information

Published: 2013
First available in Project Euclid: 13 December 2013

zbMATH: 1280.62086
MathSciNet: MR3151760
Digital Object Identifier: 10.1214/13-EJS868

Subjects:
Primary: 62J05
Secondary: 52B99

Keywords: convex geometry, deconvolution, high dimensions, non-negativity constraints, persistence, random matrices, separating hyperplane, sparse recovery

Rights: Copyright © 2013 The Institute of Mathematical Statistics and the Bernoulli Society
