
Adaptive sensing performance lower bounds for sparse signal detection and support estimation

Rui M. Castro

Abstract

This paper gives a precise characterization of the fundamental limits of adaptive sensing for diverse estimation and testing problems concerning sparse signals. We consider in particular the setting introduced in (IEEE Trans. Inform. Theory 57 (2011) 6222–6235) and show necessary conditions on the minimum signal magnitude for both detection and estimation: if $\mathbf{x}\in\mathbb{R}^{n}$ is a sparse vector with $s$ non-zero components, then it cannot be reliably detected in noise unless the magnitude of the non-zero components exceeds $\sqrt{2/s}$, and the signal support cannot be exactly identified unless the minimum magnitude exceeds $\sqrt{2\log s}$. Notably, neither condition depends on $n$, the extrinsic signal dimension. These results show that the adaptive sensing methodologies proposed previously in the literature are essentially optimal and cannot be substantially improved. In addition, they provide further insights into the limits of adaptive compressive sensing.
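
To make the gap between the two thresholds concrete, the sketch below simulates support estimation in the sensing model of Haupt, Castro and Nowak [24] (noisy coordinate-wise observations under a total precision budget) and compares a single non-adaptive pass, thresholded at the classical $\sqrt{2\log n}$ level, against a multi-stage distillation procedure in the spirit of [24]. This is a minimal Monte Carlo illustration, not the paper's construction: the stage count, budget split and signal amplitude are choices made here for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(x, idx, precision):
    # Observe y_j = x_j + w_j / sqrt(precision), w_j ~ N(0, 1); one unit of
    # budget buys one precision-1 observation of one coordinate.
    return x[idx] + rng.standard_normal(idx.size) / np.sqrt(precision)

def nonadaptive(x, budget):
    # One uniform pass over all n coordinates; threshold at the classical
    # sqrt(2 log n) level, scaled by the per-coordinate noise deviation.
    n = x.size
    gamma = budget / n
    y = measure(x, np.arange(n), gamma)
    return np.flatnonzero(y > np.sqrt(2 * np.log(n) / gamma))

def distilled(x, budget, stages=6):
    # Distillation in the spirit of [24]: each stage re-measures the surviving
    # coordinates and keeps those with positive observations, discarding about
    # half of the remaining null coordinates; a final stage spends the leftover
    # budget on the few survivors. Constant per-stage precision suffices since
    # the surviving set roughly halves each stage (total cost <= 2n * gamma).
    n = x.size
    alive = np.arange(n)
    gamma = (budget / 2) / (2 * n)  # distillation stages get half the budget
    for _ in range(stages):
        y = measure(x, alive, gamma)
        alive = alive[y > 0]
    gamma_final = (budget / 2) / alive.size
    y = measure(x, alive, gamma_final)
    return alive[y > np.sqrt(2 * np.log(alive.size) / gamma_final)]

# The amplitude mu sits between the adaptive support threshold
# sqrt(2 log s) ~ 2.04 and the non-adaptive one sqrt(2 log n) ~ 4.08.
n, s, mu, budget, trials = 4096, 8, 2.6, 2 * 4096, 200
for name, method in [("non-adaptive", nonadaptive), ("distilled", distilled)]:
    tp = fp = 0
    for _ in range(trials):
        x = np.zeros(n)
        support = rng.choice(n, size=s, replace=False)
        x[support] = mu
        est = method(x, budget)
        hits = np.isin(est, support).sum()
        tp += hits
        fp += est.size - hits
    print(f"{name:>12}: {tp / trials:.2f} of {s} true positives, "
          f"{fp / trials:.2f} false positives per trial")
```

Typical output shows the distillation procedure recovering most of the support with almost no false positives, while the non-adaptive pass misses a large fraction of it at the same amplitude and budget, consistent with the $\sqrt{2\log s}$ versus $\sqrt{2\log n}$ scalings.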

Article information

Source
Bernoulli, Volume 20, Number 4 (2014), 2217-2246.

Dates
First available in Project Euclid: 19 September 2014

Permanent link to this document
https://projecteuclid.org/euclid.bj/1411134458

Digital Object Identifier
doi:10.3150/13-BEJ555

Mathematical Reviews number (MathSciNet)
MR3263103

Zentralblatt MATH identifier
1357.94030

Keywords
adaptive sensing; minimax lower bounds; sequential experimental design; sparsity-based models

Citation

Castro, Rui M. Adaptive sensing performance lower bounds for sparse signal detection and support estimation. Bernoulli 20 (2014), no. 4, 2217–2246. doi:10.3150/13-BEJ555. https://projecteuclid.org/euclid.bj/1411134458

References

  • [1] Addario-Berry, L., Broutin, N., Devroye, L. and Lugosi, G. (2010). On combinatorial testing problems. Ann. Statist. 38 3063–3092.
  • [2] Arias-Castro, E., Candès, E.J. and Davenport, M.A. (2013). On the fundamental limits of adaptive sensing. IEEE Trans. Inform. Theory 59 472–481.
  • [3] Arias-Castro, E., Candès, E.J., Helgason, H. and Zeitouni, O. (2008). Searching for a trail of evidence in a maze. Ann. Statist. 36 1726–1757.
  • [4] Balcan, M.-F., Beygelzimer, A. and Langford, J. (2006). Agnostic active learning. In 23rd International Conference on Machine Learning 65–72.
  • [5] Bessler, S.A. (1960). Theory and applications of the sequential design of experiments, $k$-actions and infinitely many experiments: Part I – Theory. Technical Report 55, Stanford Univ., Applied Mathematics and Statistics Laboratories.
  • [6] Blanchard, G. and Geman, D. (2005). Hierarchical testing designs for pattern recognition. Ann. Statist. 33 1155–1202.
  • [7] Butucea, C. and Ingster, Y. (2013). Detection of a sparse submatrix of a high-dimensional noisy matrix. Bernoulli 19 2652–2688.
  • [8] Cai, T.T., Jin, J. and Low, M.G. (2007). Estimation and confidence sets for sparse normal mixtures. Ann. Statist. 35 2421–2449.
  • [9] Castro, R., Willett, R. and Nowak, R. (2005). Faster rates in regression via active learning. In Advances in Neural Information Processing Systems 18 179–186.
  • [10] Castro, R.M. and Nowak, R.D. (2008). Minimax bounds for active learning. IEEE Trans. Inform. Theory 54 2339–2353.
  • [11] Chernoff, H. (1959). Sequential design of experiments. Ann. Math. Statist. 30 755–770.
  • [12] Cohn, D., Ghahramani, Z. and Jordan, M. (1996). Active learning with statistical models. J. Artificial Intelligence Res. 4 129–145.
  • [13] Dasgupta, S. (2004). Analysis of a greedy active learning strategy. In Advances in Neural Information Processing Systems 17 337–344.
  • [14] Dasgupta, S. (2005). Coarse sample complexity bounds for active learning. In Advances in Neural Information Processing Systems 18 235–242.
  • [15] Dasgupta, S., Kalai, A. and Monteleoni, C. (2005). Analysis of perceptron-based active learning. In Eighteenth Annual Conference on Learning Theory (COLT) 249–263.
  • [16] Donoho, D. and Jin, J. (2004). Higher criticism for detecting sparse heterogeneous mixtures. Ann. Statist. 32 962–994.
  • [17] Donoho, D.L. (2006). Compressed sensing. IEEE Trans. Inform. Theory 52 1289–1306.
  • [18] El-Gamal, M.A. (1991). The role of priors in active Bayesian learning in the sequential statistical decision framework. In Maximum Entropy and Bayesian Methods (Laramie, WY, 1990). Fund. Theories Phys. 43 (W.T. Grandy and L.H. Schick, eds.) 33–38. Dordrecht: Kluwer Academic.
  • [19] Fedorov, V.V. (1972). Theory of Optimal Experiments. New York: Academic Press.
  • [20] Freund, Y., Seung, H.S., Shamir, E. and Tishby, N. (1997). Selective sampling using the query by committee algorithm. Machine Learning 28 133–168.
  • [21] Hall, P. and Molchanov, I. (2003). Sequential methods for design-adaptive estimation of discontinuities in regression curves and surfaces. Ann. Statist. 31 921–941.
  • [22] Hanneke, S. (2011). Rates of convergence in active learning. Ann. Statist. 39 333–361.
  • [23] Haupt, J., Baraniuk, R., Castro, R. and Nowak, R. (2012). Sequentially designed compressed sensing. In IEEE Statistical Signal Processing Workshop (IEEE SSP) Proceedings 401–404. Available at http://www.win.tue.nl/~rmcastro/publications/SCS.pdf.
  • [24] Haupt, J., Castro, R.M. and Nowak, R. (2011). Distilled sensing: Adaptive sampling for sparse detection and estimation. IEEE Trans. Inform. Theory 57 6222–6235.
  • [25] Ingster, Y.I. (1997). Some problems of hypothesis testing leading to infinitely divisible distributions. Math. Methods Statist. 6 47–69.
  • [26] Ingster, Y.I. and Suslina, I.A. (2003). Nonparametric Goodness-of-Fit Testing Under Gaussian Models. Lecture Notes in Statistics 169. New York: Springer.
  • [27] Kim, J.-C. and Korostelev, A. (2000). Rates of convergence for the sup-norm risk in image models under sequential designs. Statist. Probab. Lett. 46 391–399.
  • [28] Koltchinskii, V. (2010). Rademacher complexities and bounding the excess risk in active learning. J. Mach. Learn. Res. 11 2457–2485.
  • [29] Lai, T.L. and Robbins, H. (1985). Asymptotically efficient adaptive allocation rules. Adv. in Appl. Math. 6 4–22.
  • [30] Malloy, M. and Nowak, R. (2011). On the limits of sequential testing in high dimensions. In Asilomar Conference on Signals, Systems and Computers 1245–1249. Available at http://arxiv.org/abs/1105.4540.
  • [31] Malloy, M. and Nowak, R. (2011). Sequential analysis in high-dimensional multiple testing and sparse recovery. In The IEEE International Symposium on Information Theory 2661–2665. Available at http://arxiv.org/abs/1103.5991v1.
  • [32] Meinshausen, N. and Rice, J. (2006). Estimating the proportion of false null hypotheses among a large number of independently tested hypotheses. Ann. Statist. 34 373–393.
  • [33] Novak, E. (1996). On the power of adaption. J. Complexity 12 199–237.
  • [34] Tsybakov, A.B. (2009). Introduction to Nonparametric Estimation. New York: Springer.
  • [35] Wald, A. (1947). Sequential Analysis. New York: Wiley.
  • [36] Wasserman, L. (2006). All of Nonparametric Statistics. Springer Texts in Statistics. New York: Springer.