Electronic Journal of Statistics

High dimensional posterior convergence rates for decomposable graphical models

Ruoxuan Xiang, Kshitij Khare, and Malay Ghosh

Full-text: Open access


Gaussian concentration graphical models are among the most popular models for sparse covariance estimation with high-dimensional data. In recent years, much research has gone into the development of methods that facilitate Bayesian inference for these models under the standard $G$-Wishart prior. However, the convergence properties of the resulting posteriors are not completely understood, particularly in high-dimensional settings. In this paper, we derive high-dimensional posterior convergence rates for the class of decomposable concentration graphical models. A key initial step that facilitates our analysis is a transformation to the Cholesky factor of the inverse covariance matrix. As a by-product of our analysis, we also obtain convergence rates for the corresponding maximum likelihood estimator.
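For intuition, the Cholesky transformation mentioned in the abstract can be illustrated numerically: for a decomposable graph whose vertices are listed in a perfect elimination ordering, the sparsity pattern of the precision matrix is preserved in its lower-triangular Cholesky factor. A minimal sketch (the matrix values and the path-graph example are illustrative, not taken from the paper):

```python
import numpy as np

# Path graph 1-2-3 (a decomposable graph; the ordering 1, 2, 3 is a
# perfect elimination ordering), so the precision matrix Omega has a
# structural zero in entry (1, 3). Values are arbitrary illustrations.
Omega = np.array([[2.0, 0.6, 0.0],
                  [0.6, 2.0, 0.6],
                  [0.0, 0.6, 2.0]])

# Lower-triangular Cholesky factor L with Omega = L L^T.
L = np.linalg.cholesky(Omega)

print(np.allclose(L @ L.T, Omega))  # True: L is a valid factorization
print(abs(L[2, 0]) < 1e-12)         # True: the (3, 1) zero survives in L
```

This zero-preservation property is what makes the Cholesky parametrization convenient for decomposable graphs: the free entries of $L$ match the edge set of the graph, so the analysis can proceed entry-wise on the factor.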

Article information

Electron. J. Statist., Volume 9, Number 2 (2015), 2828-2854.

Received: April 2015
First available in Project Euclid: 31 December 2015


Primary: 62F15: Bayesian inference
Secondary: 62G20: Asymptotic properties

Keywords: Graphical models; decomposable graph; posterior consistency; high-dimensional data


Xiang, Ruoxuan; Khare, Kshitij; Ghosh, Malay. High dimensional posterior convergence rates for decomposable graphical models. Electron. J. Statist. 9 (2015), no. 2, 2828--2854. doi:10.1214/15-EJS1084. https://projecteuclid.org/euclid.ejs/1451577218

