Electronic Journal of Statistics

Sign-constrained least squares estimation for high-dimensional regression

Nicolai Meinshausen

Full-text: Open access

Abstract

Many regularization schemes for high-dimensional regression have been put forward. Most require the choice of a tuning parameter, using model selection criteria or cross-validation. We show that sign-constrained least squares estimation is a very simple and effective regularization technique for a certain class of high-dimensional regression problems. The sign constraint has to be derived from prior knowledge or an initial estimator. The success depends on conditions that are easy to check in practice. A sufficient condition for our results is that most variables with the same sign constraint are positively correlated. For a sparse optimal predictor, a non-asymptotic bound on the $\ell_{1}$-error of the regression coefficients is then proven. Without any further regularization, the regression vector can be estimated consistently as long as $s^{2}\log(p)/n\rightarrow 0$ for $n\rightarrow\infty$, where $s$ is the sparsity of the optimal regression vector, $p$ the number of variables and $n$ the sample size. The bounds are almost as tight as similar bounds for the Lasso under strongly correlated design, even though the method has no tuning parameter and does not require cross-validation. Network tomography is shown to be an application in which the conditions required for the success of sign-constrained least squares are naturally fulfilled, and empirical results confirm the effectiveness of the sign constraint for sparse recovery when predictor variables are strongly correlated.
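
As a concrete illustration of the estimator described in the abstract, the sketch below solves the sign-constrained least squares problem $\hat{\beta}=\operatorname*{arg\,min}_{\beta}\|Y-X\beta\|_{2}^{2}$ subject to $s_{j}\beta_{j}\geq 0$ for a given sign vector $s\in\{-1,+1\}^{p}$, by flipping the columns with a negative constraint and solving a standard non-negative least squares problem. The function name, the reduction to SciPy's NNLS solver and the toy data are illustrative assumptions, not taken from the paper.

    # Minimal sketch (not the paper's code): sign-constrained least squares
    # reduced to non-negative least squares by flipping constrained columns.
    import numpy as np
    from scipy.optimize import nnls

    def sign_constrained_ls(X, y, signs):
        """Minimise ||y - X b||_2 subject to signs[j] * b[j] >= 0 for every j."""
        signs = np.asarray(signs, dtype=float)
        X_flipped = X * signs              # column j multiplied by signs[j]
        b_nonneg, _ = nnls(X_flipped, y)   # standard NNLS, no tuning parameter
        return signs * b_nonneg            # map back to the original sign pattern

    # Toy example: positively correlated design, sparse non-negative truth.
    rng = np.random.default_rng(0)
    n, p, s = 100, 200, 5
    common = rng.normal(size=(n, 1))       # shared component gives positive correlation
    X = common + rng.normal(size=(n, p))
    beta = np.zeros(p); beta[:s] = 1.0     # sparse optimal regression vector
    y = X @ beta + 0.5 * rng.normal(size=n)
    beta_hat = sign_constrained_ls(X, y, np.ones(p))
    print(np.abs(beta_hat - beta).sum())   # l1-error, the quantity bounded in the paper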

Article information

Source
Electron. J. Statist., Volume 7 (2013), 1607-1631.

Dates
First available in Project Euclid: 4 June 2013

Permanent link to this document
https://projecteuclid.org/euclid.ejs/1370359951

Digital Object Identifier
doi:10.1214/13-EJS818

Mathematical Reviews number (MathSciNet)
MR3066380

Zentralblatt MATH identifier
1327.62422

Subjects
Primary: 62J07 (Ridge regression; shrinkage estimators), 62J05 (Linear regression)

Keywords
Variable selection; shrinkage estimators; quadratic programming; high-dimensional linear models

Citation

Meinshausen, Nicolai. Sign-constrained least squares estimation for high-dimensional regression. Electron. J. Statist. 7 (2013), 1607–1631. doi:10.1214/13-EJS818. https://projecteuclid.org/euclid.ejs/1370359951


