Bayesian Analysis

Flexible Bayesian Dynamic Modeling of Correlation and Covariance Matrices

Shiwei Lan, Andrew Holbrook, Gabriel A. Elias, Norbert J. Fortin, Hernando Ombao, and Babak Shahbaba

Advance publication

This article is in its final form and can be cited using the date of online publication and the DOI.

Full-text: Open access


Modeling correlation (and covariance) matrices can be challenging due to the positive-definiteness constraint and potential high dimensionality. Our approach is to decompose the covariance matrix into the correlation and variance matrices and propose a novel Bayesian framework based on modeling the correlations as products of unit vectors. By specifying a wide range of distributions on a sphere (e.g., the squared-Dirichlet distribution), the proposed approach induces flexible prior distributions for covariance matrices (that go beyond the commonly used inverse-Wishart prior). For modeling real-life spatio-temporal processes with complex dependence structures, we extend our method to dynamic cases and introduce unit-vector Gaussian process priors in order to capture the evolution of correlation among components of a multivariate time series. To handle the intractability of the resulting posterior, we introduce the adaptive Δ-Spherical Hamiltonian Monte Carlo. We demonstrate the validity and flexibility of our proposed framework in a simulation study of periodic processes and an analysis of rats’ local field potential activity in a complex sequence memory task.
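The decomposition described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): each row of the correlation matrix's Cholesky factor is a unit vector, here drawn via the squared-Dirichlet construction (z ~ Dirichlet, x = sign ⊙ √z lies on the sphere), and the covariance is then recovered as D·R·D with D a diagonal matrix of standard deviations. Function names and the choice of `numpy` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sphere_squared_dirichlet(alpha, rng):
    """Draw a point on the unit sphere via the squared-Dirichlet construction:
    z ~ Dirichlet(alpha), x_i = s_i * sqrt(z_i) with random signs s_i,
    so that sum_i x_i^2 = 1 by construction."""
    z = rng.dirichlet(alpha)
    signs = rng.choice([-1.0, 1.0], size=len(alpha))
    return signs * np.sqrt(z)

def correlation_from_unit_rows(rows):
    """Stack the i-th unit vector (living on the sphere in i+1 coordinates)
    as row i of a lower-triangular L; then R = L @ L.T is positive
    semi-definite with unit diagonal, i.e. a valid correlation matrix."""
    d = len(rows)
    L = np.zeros((d, d))
    for i, v in enumerate(rows):
        L[i, : i + 1] = v
    return L @ L.T

d = 4
# Row i is a unit vector with i+1 coordinates.
rows = [sample_sphere_squared_dirichlet(np.ones(i + 1), rng) for i in range(d)]
R = correlation_from_unit_rows(rows)

sigma = np.array([1.0, 0.5, 2.0, 1.5])        # standard deviations
Sigma = np.diag(sigma) @ R @ np.diag(sigma)   # covariance = D R D
```

Placing a prior on the Dirichlet concentration parameters (or replacing the draws with a Gaussian process over time, as in the dynamic extension) then induces a prior on the whole correlation matrix.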

Article information

Bayesian Anal., Advance publication (2019), 30 pages.

First available in Project Euclid: 4 November 2019


Keywords: dynamic covariance modeling; spatio-temporal models; geometric methods; posterior contraction; Δ-Spherical Hamiltonian Monte Carlo

Creative Commons Attribution 4.0 International License.


Lan, Shiwei; Holbrook, Andrew; Elias, Gabriel A.; Fortin, Norbert J.; Ombao, Hernando; Shahbaba, Babak. Flexible Bayesian Dynamic Modeling of Correlation and Covariance Matrices. Bayesian Anal., advance publication, 4 November 2019. doi:10.1214/19-BA1173.



  • Allen, T. A., Morris, A. M., Mattfeld, A. T., Stark, C. E., and Fortin, N. J. (2014). “A Sequence of events model of episodic memory shows parallels in rats and humans.” Hippocampus, 24(10): 1178–1188.
  • Allen, T. A., Salz, D. M., McKenzie, S., and Fortin, N. J. (2016). “Nonspatial sequence coding in CA1 neurons.” Journal of Neuroscience, 36(5): 1547–1563.
  • Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis. Wiley Series in Probability and Statistics. Hoboken, N. J.: Wiley Interscience.
  • Banfield, J. D. and Raftery, A. E. (1993). “Model-based Gaussian and non-Gaussian clustering.” Biometrics, 803–821.
  • Barnard, J., McCulloch, R., and Meng, X.-L. (2000). “Modeling covariance matrices in terms of standard deviations and correlations, with application to shrinkage.” Statistica Sinica, 1281–1311.
  • Bensmail, H., Celeux, G., Raftery, A. E., and Robert, C. P. (1997). “Inference in model-based cluster analysis.” Statistics and Computing, 7(1): 1–10.
  • Beskos, A., Pinski, F. J., Sanz-Serna, J. M., and Stuart, A. M. (2011). “Hybrid Monte Carlo on Hilbert spaces.” Stochastic Processes and their Applications, 121(10): 2201–2230.
  • Byrne, S. and Girolami, M. (2013). “Geodesic Monte Carlo on embedded manifolds.” Scandinavian Journal of Statistics, 40(4): 825–845.
  • Celeux, G. and Govaert, G. (1995). “Gaussian parsimonious clustering models.” Pattern recognition, 28(5): 781–793.
  • Chiu, T. Y., Leonard, T., and Tsui, K.-W. (1996). “The matrix-logarithmic covariance model.” Journal of the American Statistical Association, 91(433): 198–210.
  • Cho, H. and Fryzlewicz, P. (2015). “Multiple-change-point detection for high dimensional time series via sparsified binary segmentation.” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 77(2): 475–507.
  • Cribben, I., Haraldsdottir, R., Atlas, L. Y., Wager, T. D., and Lindquist, M. A. (2012). “Dynamic connectivity regression: Determining state-related changes in brain connectivity.” NeuroImage, 61(4): 907–920.
  • Dahlhaus, R. (2000). “A likelihood approximation for locally stationary processes.” Annals of Statistics, 28(6): 1762–1794.
  • Daniels, M. J. (1999). “A prior for the variance in hierarchical models.” Canadian Journal of Statistics, 27(3): 567–578.
  • Daniels, M. J. and Kass, R. E. (1999). “Nonconjugate Bayesian Estimation of Covariance Matrices and its Use in Hierarchical Models.” Journal of the American Statistical Association, 94(448): 1254–1263.
  • Daniels, M. J. and Kass, R. E. (2001). “Shrinkage Estimators for Covariance Matrices.” Biometrics, 57(4): 1173–1184.
  • Dempster, A. P. (1972). “Covariance Selection.” Biometrics, 28(1): 157–175.
  • Duane, S., Kennedy, A. D., Pendleton, B. J., and Roweth, D. (1987). “Hybrid Monte Carlo.” Physics Letters B, 195(2): 216–222.
  • Fiecas, M. and Ombao, H. (2016). “Modeling the Evolution of Dynamic Brain Processes During an Associative Learning Experiment.” Journal of the American Statistical Association, 111(516): 1440–1453.
  • Fisher, R. (1953). “Dispersion on a sphere.” In Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, volume 217, 295–305. The Royal Society.
  • Fox, E. B. and Dunson, D. B. (2015). “Bayesian nonparametric covariance regression.” Journal of Machine Learning Research, 16: 2501–2542.
  • Friedman, J., Hastie, T., and Tibshirani, R. (2008). “Sparse inverse covariance estimation with the graphical lasso.” Biostatistics, 9(3): 432–441.
  • Girolami, M. and Calderhead, B. (2011). “Riemann manifold Langevin and Hamiltonian Monte Carlo methods.” Journal of the Royal Statistical Society, Series B, (with discussion) 73(2): 123–214.
  • Hoffman, M. D. and Gelman, A. (2014). “The no-U-turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo.” The Journal of Machine Learning Research, 15(1): 1593–1623.
  • Holbrook, A., Lan, S., Vandenberg-Rodes, A., and Shahbaba, B. (2016). “Geodesic Lagrangian Monte Carlo over the space of positive definite matrices: with application to Bayesian spectral density estimation.” arXiv preprint arXiv:1612.08224.
  • Holbrook, A., Vandenberg-Rodes, A., Fortin, N., and Shahbaba, B. (2017). “A Bayesian supervised dual-dimensionality reduction model for simultaneous decoding of LFP and spike train signals.” Stat, 6(1): 53–67. Sta4.137.
  • Kent, J. T. (1982). “The Fisher-Bingham distribution on the sphere.” Journal of the Royal Statistical Society. Series B (Methodological), 71–80.
  • Lan, S., Holbrook, A., Elias, G. A., Fortin, N. J., Ombao, H., and Shahbaba, B. (2019). “Web-based supplementary file for ‘Flexible Bayesian Dynamic Modeling of Correlation and Covariance Matrices’.” Bayesian Analysis.
  • Lan, S. and Shahbaba, B. (2016). Algorithmic Advances in Riemannian Geometry and Applications, chapter 2, 25–71. Advances in Computer Vision and Pattern Recognition. Springer International Publishing, 1 edition.
  • Lan, S., Stathopoulos, V., Shahbaba, B., and Girolami, M. (2015). “Markov Chain Monte Carlo from Lagrangian Dynamics.” Journal of Computational and Graphical Statistics, 24(2): 357–378.
  • Lan, S., Zhou, B., and Shahbaba, B. (2014). “Spherical Hamiltonian Monte Carlo for constrained target distributions.” In Proceedings of the 31st International Conference on Machine Learning, volume 32, 629–637. Beijing.
  • Leonard, T. and Hsu, J. S. (1992). “Bayesian inference for a covariance matrix.” The Annals of Statistics, 1669–1696.
  • Liechty, J. C. (2004). “Bayesian correlation estimation.” Biometrika, 91(1): 1–14.
  • Lindquist, M. A., Xu, Y., Nebel, M. B., and Caffo, B. S. (2014). “Evaluating dynamic bivariate correlations in resting-state fMRI: A comparison study and a new approach.” NeuroImage, 101(Supplement C): 531–546.
  • Liu, C. (1993). “Bartlett’s Decomposition of the Posterior Distribution of the Covariance for Normal Monotone Ignorable Missing Data.” Journal of Multivariate Analysis, 46(2): 198–206.
  • Magnus, J. R. and Neudecker, H. (1979). “The commutation matrix: some properties and applications.” The Annals of Statistics, 381–394.
  • Mardia, K. V. and Jupp, P. E. (2009). Directional statistics, volume 494. John Wiley & Sons.
  • Mardia, K. V., Kent, J. T., and Bibby, J. M. (1980). Multivariate Analysis. Probability and Mathematical Statistics. Academic Press.
  • Murray, I., Adams, R. P., and MacKay, D. J. (2010). “Elliptical slice sampling.” JMLR: Workshop and Conference Proceedings, 9: 541–548.
  • Nason, G. P., Von Sachs, R., and Kroisandt, G. (2000). “Wavelet processes and adaptive estimation of the evolutionary wavelet spectrum.” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 62(2): 271–292.
  • Neal, R. M. (2003). “Slice sampling.” Annals of Statistics, 31(3): 705–767.
  • Neal, R. M. (2011). “MCMC using Hamiltonian dynamics.” In Brooks, S., Gelman, A., Jones, G., and Meng, X. L. (eds.), Handbook of Markov Chain Monte Carlo, 113–162. Chapman and Hall/CRC.
  • Nesterov, Y. (2009). “Primal-dual subgradient methods for convex problems.” Mathematical programming, 120(1): 221–259.
  • Ng, C.-W., Elias, G. A., Asem, J. S., Allen, T. A., and Fortin, N. J. (2017). “Nonspatial sequence coding varies along the CA1 transverse axis.” Behavioural Brain Research.
  • Ombao, H., von Sachs, R., and Guo, W. (2005). “SLEX Analysis of Multivariate Nonstationary Time Series.” Journal of the American Statistical Association, 100(470): 519–531.
  • Park, T., Eckley, I. A., and Ombao, H. C. (2014). “Estimating Time-Evolving Partial Coherence Between Signals via Multivariate Locally Stationary Wavelet Processes.” IEEE Transactions on Signal Processing, 62(20): 5240–5250.
  • Pinheiro, J. C. and Bates, D. M. (1996). “Unconstrained parametrizations for variance-covariance matrices.” Statistics and Computing, 6(3): 289–296.
  • Pourahmadi, M. (1999). “Joint mean-covariance models with applications to longitudinal data: Unconstrained parameterisation.” Biometrika, 677–690.
  • Pourahmadi, M. (2000). “Maximum likelihood estimation of generalised linear models for multivariate normal covariance matrix.” Biometrika, 425–435.
  • Pourahmadi, M. and Wang, X. (2015). “Distribution of random correlation matrices: Hyperspherical parameterization of the Cholesky factor.” Statistics & Probability Letters, 106: 5–12.
  • Prado, R. (2013). “Sequential estimation of mixtures of structured autoregressive models.” Computational Statistics & Data Analysis, 58(Supplement C): 58–70. The Third Special Issue on Statistical Signal Extraction and Filtering.
  • Prado, R., West, M., and Krystal, A. D. (2001). “Multichannel electroencephalographic analyses via dynamic regression models with time-varying lag–lead structure.” Journal of the Royal Statistical Society: Series C (Applied Statistics), 50(1): 95–109.
  • Priestley, M. (1965). “Evolutionary and non-stationary processes.” Journal of the Royal Statistical Society, Series B, 27: 204–237.
  • Rao, T. S. (1970). “The Fitting of Non-Stationary Time-Series Models with Time-Dependent Parameters.” Journal of the Royal Statistical Society. Series B (Methodological), 32(2): 312–322.
  • Rapisarda, F., Brigo, D., and Mercurio, F. (2007). “Parameterizing correlations: a geometric interpretation.” IMA Journal of Management Mathematics, 18(1): 55–73.
  • Smith, W. and Hocking, R. (1972). “Algorithm as 53: Wishart variate generator.” Journal of the Royal Statistical Society. Series C (Applied Statistics), 21(3): 341–345.
  • Sverdrup, E. (1947). “Derivation of the Wishart distribution of the second order sample moments by straightforward integration of a multiple integral.” Scandinavian Actuarial Journal, 1947(1): 151–166.
  • Ting, C. M., Seghouane, A. K., Salleh, S. H., and Noor, A. M. (2015). “Estimating Effective Connectivity from fMRI Data Using Factor-based Subspace Autoregressive Models.” IEEE Signal Processing Letters, 22(6): 757–761.
  • Tokuda, T., Goodrich, B., Van Mechelen, I., Gelman, A., and Tuerlinckx, F. (2011). “Visualizing distributions of covariance matrices.” Technical report, Columbia University, New York, USA.
  • Tracy, D. S. and Dwyer, P. S. (1969). “Multivariate maxima and minima with matrix derivatives.” Journal of the American Statistical Association, 64(328): 1576–1594.
  • van der Vaart, A. and van Zanten, H. (2011). “Information Rates of Nonparametric Gaussian Process Methods.” Journal of Machine Learning Research, 12: 2095–2119.
  • van der Vaart, A. W. and van Zanten, J. H. (2008a). “Rates of Contraction of Posterior Distributions Based on Gaussian Process Priors.” The Annals of Statistics, 36(3): 1435–1463.
  • van der Vaart, A. W. and van Zanten, J. H. (2008b). Reproducing kernel Hilbert spaces of Gaussian priors, volume 3 of Collections, 200–222. Beachwood, Ohio, USA: Institute of Mathematical Statistics.
  • van der Vaart, A. W. and van Zanten, J. H. (2009). “Adaptive Bayesian estimation using a Gaussian random field with inverse Gamma bandwidth.” Annals of Statistics, 37(5B): 2655–2675.
  • West, M., Prado, R., and Krystal, A. D. (1999). “Evaluation and Comparison of EEG Traces: Latent Structure in Nonstationary Time Series.” Journal of the American Statistical Association, 94(446): 375–387.
  • Wilson, A. G. and Ghahramani, Z. (2011). “Generalised Wishart Processes.” In Proceedings of the 27th Conference on Uncertainty in Artificial Intelligence.
  • Wishart, J. (1928). “The generalised product moment distribution in samples from a normal multivariate population.” Biometrika, 32–52.
  • Yang, R. and Berger, J. O. (1994). “Estimation of a covariance matrix using the reference prior.” The Annals of Statistics, 1195–1211.
  • Yang, Y. and Dunson, D. B. (2016). “Bayesian manifold regression.” The Annals of Statistics, 44(2): 876–905.

Supplemental materials