The Annals of Statistics

Flexible covariance estimation in graphical Gaussian models

Bala Rajaratnam, Hélène Massam, and Carlos M. Carvalho


Abstract

In this paper, we propose a class of Bayes estimators for the covariance matrix of graphical Gaussian models Markov with respect to a decomposable graph G. Working with the WPG family defined by Letac and Massam [Ann. Statist. 35 (2007) 1278–1323], we derive closed-form expressions for Bayes estimators under the entropy and squared-error losses. The WPG family includes the classical inverse of the hyper inverse Wishart but has many more shape parameters, thus allowing different parts of the covariance matrix to be shrunk differentially. Moreover, using this family avoids recourse to MCMC, which is often infeasible in high-dimensional problems. We illustrate the performance of our estimators through a collection of numerical examples in which we explore frequentist risk properties and the efficacy of graphs in the estimation of high-dimensional covariance structures.
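
To make concrete how a decomposable graph enters covariance estimation, the sketch below assembles the standard graph-constrained precision estimate from the clique and separator blocks of a sample covariance matrix (the classical identity for decomposable models, cf. Lauritzen [24]). It is a minimal NumPy illustration under assumptions made here (toy graph, simulated data, function name), not the paper's closed-form WPG Bayes estimators.

import numpy as np

def graph_constrained_precision(S, cliques, separators):
    # Standard decomposable-model identity (cf. Lauritzen [24]):
    #   K_hat = sum over cliques C of [inv(S_CC)]^0 - sum over separators D of [inv(S_DD)]^0,
    # where [.]^0 pads a block back to the full p x p matrix with zeros.
    p = S.shape[0]
    K = np.zeros((p, p))
    for C in cliques:
        block = np.ix_(C, C)
        K[block] += np.linalg.inv(S[block])
    for D in separators:
        block = np.ix_(D, D)
        K[block] -= np.linalg.inv(S[block])
    return K

# Toy example (illustrative): p = 4, decomposable graph with cliques {0,1,2}
# and {1,2,3}, separator {1,2}; S is a simulated sample covariance matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))
S = X.T @ X / X.shape[0]
K_hat = graph_constrained_precision(S, cliques=[[0, 1, 2], [1, 2, 3]], separators=[[1, 2]])
Sigma_hat = np.linalg.inv(K_hat)  # covariance estimate Markov with respect to the graph

The resulting Sigma_hat agrees with S on the clique blocks and has zero entries in its inverse for vertices not joined by an edge; the paper's Bayes estimators exploit this same clique/separator structure while shrinking the different parts of the matrix through the WPG shape parameters.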

Article information

Source
Ann. Statist., Volume 36, Number 6 (2008), 2818-2849.

Dates
First available in Project Euclid: 5 January 2009

Permanent link to this document
https://projecteuclid.org/euclid.aos/1231165186

Digital Object Identifier
doi:10.1214/08-AOS619

Mathematical Reviews number (MathSciNet)
MR2485014

Zentralblatt MATH identifier
1168.62054

Subjects
Primary: 62H12 (Estimation), 62C10 (Bayesian problems; characterization of Bayes procedures), 62F15 (Bayesian inference)

Keywords
Covariance estimation; Gaussian graphical models; Bayes estimators; shrinkage; regularization

Citation

Rajaratnam, Bala; Massam, Hélène; Carvalho, Carlos M. Flexible covariance estimation in graphical Gaussian models. Ann. Statist. 36 (2008), no. 6, 2818–2849. doi:10.1214/08-AOS619. https://projecteuclid.org/euclid.aos/1231165186



References

  • [1] Bernardo, J. M. (1979). Reference posterior distributions for Bayesian inference. J. Roy. Statist. Soc. Ser. B 41 113–147.
  • [2] Bickel, P. and Levina, E. (2008). Regularized estimation of large covariance matrices. Ann. Statist. 36 199–227.
  • [3] Carvalho, C. M., Massam, H. and West, M. (2007). Simulation of hyper-inverse Wishart distributions in graphical models. Biometrika 94 647–659.
  • [4] Chen, C. (1979). Bayesian inference for a normal dispersion matrix and its application to stochastic multiple regression analysis. J. Roy. Statist. Soc. Ser. B 41 235–248.
  • [5] Clarke, B. and Barron, A. (1990). Information-theoretic asymptotics of Bayes methods. IEEE Trans. Inform. Theory 36 453–471.
  • [6] Clarke, B. and Yuan, A. (2004). Partial information reference priors: derivation and interpretations. J. Statist. Plann. Inference 123 313–345.
  • [7] Consonni, G. and Veronese, P. (2003). Enriched conjugate and reference priors for the Wishart family on the symmetric cones. Ann. Statist. 31 1491–1516.
  • [8] Daniels, M. and Kass, R. (1999). Nonconjugate Bayesian estimation of covariance matrices and its use in hierarchical models. J. Amer. Statist. Assoc. 94 1254–1263.
  • [9] Daniels, M. and Kass, R. (2001). Shrinkage estimators for covariance matrices. Biometrics 57 1173–1184.
  • [10] Datta, G. and Ghosh, M. (1995). Some remarks on noninformative priors. J. Amer. Statist. Assoc. 90 1357–1363.
  • [11] Dawid, A. P. and Lauritzen, S. L. (1993). Hyper-Markov laws in the statistical analysis of decomposable graphical models. Ann. Statist. 21 1272–1317.
  • [12] Dempster, A. (1972). Covariance selection. Biometrics 28 157–175.
  • [13] Diaconis, P. and Ylvisaker, D. (1979). Conjugate priors for exponential families. Ann. Statist. 7 269–281.
  • [14] Grone, R., Johnson, C. R., Sá, E. M. and Wolkowicz, H. (1984). Positive definite completions of partial Hermitian matrices. Linear Algebra Appl. 58 109–124.
  • [15] Haff, L. R. (1977). Minimax estimators for a multinormal precision matrix. J. Multivariate Anal. 7 374–385.
  • [16] Haff, L. R. (1980). Empirical Bayes estimation of the multivariate normal covariance matrix. Ann. Statist. 8 586–597.
  • [17] Haff, L. R. (1991). The variational form of certain Bayes estimators. Ann. Statist. 19 1163–1190.
  • [18] Huang, J., Liu, N., Pourahmadi, M. and Liu, L. (2006). Covariance matrix selection and estimation via penalised normal likelihood. Biometrika 93 85–98.
  • [19] James, W. and Stein, C. (1961). Estimation with quadratic loss. Proc. Fourth Berkeley Symp. Math. Statist. Probab. (J. Neyman, ed.) 1 361–379. Univ. California Press, Berkeley.
  • [20] Jefferys, W. and Berger, J. (1992). Ockham’s razor and Bayesian analysis. American Scientist 80 64–72.
  • [21] Jones, B., Carvalho, C., Dobra, A., Hans, C., Carter, C. and West, M. (2005). Experiments in stochastic computation for high-dimensional graphical models. Statist. Sci. 20 388–400.
  • [22] Krishnamoorthy, K. (1991). Estimation of normal covariance and precision matrices with incomplete data. Comm. Statist. Theory Methods 20 757–770.
  • [23] Krishnamoorthy, K. and Gupta, A. (1989). Improved minimax estimation of a normal precision matrix. Canad. J. Statist. 17 91–102.
  • [24] Lauritzen, S. L. (1996). Graphical Models. Clarendon Press, Oxford.
  • [25] Ledoit, O. and Wolf, M. (2004). A well-conditioned estimator for large-dimensional covariance matrices. J. Multivariate Anal. 88 365–411.
  • [26] Leonard, T. and Hsu, J. S. J. (1992). Bayesian inference for a covariance matrix. Ann. Statist. 20 1669–1696.
  • [27] Letac, G. and Massam, H. (2007). Wishart distributions for decomposable graphs. Ann. Statist. 35 1278–1323.
  • [28] Leung, P. and Muirhead, R. (1987). Estimation of parameter matrices and eigenvalues in MANOVA and canonical correlation analysis. Ann. Statist. 15 1651–1666.
  • [29] Lin, S. and Perlman, M. (1985). A Monte Carlo comparison of four estimators of a covariance matrix. In Multivariate Analysis VI (P. R. Krishnaiah, ed.) 411–429. North Holland, Amsterdam.
  • [30] Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the lasso. Ann. Statist. 34 1436–1462.
  • [31] Muirhead, R. J. (1982). Aspects of Multivariate Statistical Theory. Wiley, New York.
  • [32] Roverato, A. (2000). Cholesky decomposition of a hyper inverse Wishart matrix. Biometrika 87 99–112.
  • [33] Sharma, D. and Krishnamoorthy, K. (1985). Empirical Bayes estimators of normal covariance matrix. Sankhyā Ser. A 47 247–254.
  • [34] Stein, C. (1956). Some problems in multivariate analysis. Technical Report No. 6, Stanford Univ.
  • [35] Stein, C. (1975). Estimation of a covariance matrix. Rietz Lecture, 39th Annual Meeting of the IMS, Atlanta, GA.
  • [36] Whittaker, J. (1990). Graphical Models in Applied Multivariate Statistics. Wiley, Chichester.
  • [37] Yang, R. and Berger, J. O. (1994). Estimation of a covariance matrix using the reference prior. Ann. Statist. 22 1195–1211.