Electronic Journal of Statistics

Recovering block-structured activations using compressive measurements

Sivaraman Balakrishnan, Mladen Kolar, Alessandro Rinaldo, and Aarti Singh

Full-text: Open access


We consider the problems of detection and support recovery of a contiguous block of weak activation in a large matrix, from noisy, possibly adaptively chosen, compressive (linear) measurements. We precisely characterize the tradeoffs between the various problem dimensions, the signal strength, and the number of measurements required to reliably detect and recover the support of the signal, for both passive and adaptive measurement schemes. In each case, we complement algorithmic results with information-theoretic lower bounds. Analogous to the situation in the closely related problem of noisy compressed sensing, we show that for detection neither adaptivity nor structure reduces the minimax signal strength requirement. On the other hand, we show the rather surprising result that, contrary to the situation in noisy compressed sensing, the signal strength required to recover the support of a contiguous block-structured signal is strongly influenced by both the signal structure and the ability to choose measurements adaptively.
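The measurement model described above can be sketched in a few lines; the following is a minimal simulation, not the paper's actual procedure. All dimensions, the amplitude, the Gaussian sensing design, and the naive back-projection-and-scan estimator are illustrative choices made here for concreteness:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem dimensions (not taken from the paper)
n, k, mu, m, sigma = 32, 4, 0.5, 200, 1.0

# Signal: an n x n matrix with a contiguous k x k block of weak activation mu
X = np.zeros((n, n))
r, c = 10, 17                      # top-left corner of the active block
X[r:r + k, c:c + k] = mu

# Passive compressive sensing: m noisy linear measurements
# y_i = <A_i, X> + z_i, with i.i.d. Gaussian sensing matrices A_i
A = rng.standard_normal((m, n * n)) / np.sqrt(n * n)  # rows = vectorized A_i
y = A @ X.ravel() + sigma * rng.standard_normal(m)

# A simple (non-optimal) estimator: back-project the measurements,
# then scan all contiguous k x k blocks for the largest average
Xhat = (A.T @ y).reshape(n, n)
best, best_pos = -np.inf, None
for i in range(n - k + 1):
    for j in range(n - k + 1):
        s = Xhat[i:i + k, j:j + k].mean()
        if s > best:
            best, best_pos = s, (i, j)

print("true block corner:", (r, c), "estimated corner:", best_pos)
```

With weak activation and few measurements, this naive scan will often miss the block; the paper's point is precisely to quantify how large the signal strength must be, and how adaptivity and block structure change that requirement.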

Article information

Electron. J. Statist., Volume 11, Number 1 (2017), 2647-2678.

Received: August 2016
First available in Project Euclid: 27 June 2017

Primary: 62F03: Hypothesis testing
Secondary: 62F10: Point estimation

Keywords: adaptive sensing; linear measurements; structured normal means

Creative Commons Attribution 4.0 International License.


Balakrishnan, Sivaraman; Kolar, Mladen; Rinaldo, Alessandro; Singh, Aarti. Recovering block-structured activations using compressive measurements. Electron. J. Statist. 11 (2017), no. 1, 2647--2678. doi:10.1214/17-EJS1267. https://projecteuclid.org/euclid.ejs/1498528883

