Bayesian Analysis

The Matrix-F Prior for Estimating and Testing Covariance Matrices

Joris Mulder and Luis Raúl Pericchi

Full-text: Open access


The matrix-F distribution is presented as a prior for covariance matrices as an alternative to the conjugate inverted Wishart distribution. A special case of the univariate F distribution for a variance parameter is equivalent to a half-t distribution for a standard deviation, which is becoming increasingly popular in the Bayesian literature. The matrix-F distribution can be conveniently modeled as a Wishart mixture of Wishart or inverse Wishart distributions, which allows straightforward implementation in a Gibbs sampler. By mixing the covariance matrix of a multivariate normal distribution with a matrix-F distribution, a multivariate horseshoe-type prior is obtained, which is useful for modeling sparse signals. Furthermore, it is shown that the intrinsic prior for testing covariance matrices in non-hierarchical models has a matrix-F distribution. This intrinsic prior is also useful for testing inequality-constrained hypotheses on variances. Finally, through simulation, it is shown that the matrix-variate F distribution has good frequentist properties as a prior for the random-effects covariance matrix in generalized linear mixed models.
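The mixture representation mentioned in the abstract can be sketched in code. The example below is a minimal prior-sampling illustration, not the paper's implementation: it assumes a matrix-F(ν, δ, B) parameterization in which one first draws a Wishart scale matrix and then an inverse-Wishart covariance matrix conditional on it; the exact degrees-of-freedom convention (here δ + k − 1 for the inverse-Wishart step) may differ from the paper's notation.

```python
import numpy as np
from scipy.stats import wishart, invwishart

def sample_matrix_f(nu, delta, B, n_draws=1, seed=None):
    """Sample k x k covariance matrices from a matrix-F(nu, delta, B) prior
    via its Wishart mixture of inverse-Wishart representation (assumed
    parameterization):
        Psi   ~ Wishart(df=nu, scale=B)
        Sigma ~ inverse-Wishart(df=delta + k - 1, scale=Psi)
    """
    rng = np.random.default_rng(seed)
    k = B.shape[0]
    draws = []
    for _ in range(n_draws):
        psi = np.atleast_2d(wishart.rvs(df=nu, scale=B, random_state=rng))
        sigma = invwishart.rvs(df=delta + k - 1, scale=psi, random_state=rng)
        draws.append(np.atleast_2d(sigma))
    return draws

# Example: three prior draws of a 2 x 2 covariance matrix
samples = sample_matrix_f(nu=4, delta=3, B=np.eye(2), n_draws=3, seed=1)
```

In a Gibbs sampler this two-stage structure is what makes the prior convenient: conditional on the mixing matrix Psi, the covariance matrix has an inverse-Wishart full conditional, and vice versa.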

Article information

Bayesian Anal., Volume 13, Number 4 (2018), 1193-1214.

First available in Project Euclid: 12 January 2018


Keywords: matrix-variate F distribution; intrinsic prior; testing inequality constraints; horseshoe prior; hierarchical models

Creative Commons Attribution 4.0 International License.


Mulder, Joris; Pericchi, Luis Raúl. The Matrix-$F$ Prior for Estimating and Testing Covariance Matrices. Bayesian Anal. 13 (2018), no. 4, 1193--1214. doi:10.1214/17-BA1092.



  • Aunola, K., Leskinen, E., Lerkkanen, M.-K., and Nurmi, J.-E. (2004). “Developmental dynamics of math performance from preschool to grade 2.” Journal of Educational Psychology, 96: 699–713.
  • Barnard, J., McCulloch, R., and Meng, X.-L. (2000). “Modeling covariance matrices in terms of standard deviations and correlations, with applications to shrinkage.” Statistica Sinica, 10: 1281–1311.
  • Berger, J. O. (2006). “The case for objective Bayesian analysis.” Bayesian Analysis, 1: 385–402.
  • Berger, J. O. and Mortera, J. (1999). “Default Bayes factors for nonnested hypothesis testing.” Journal of the American Statistical Association, 94: 542–554.
  • Berger, J. O. and Pericchi, L. R. (1996). “The intrinsic Bayes factor for model selection and prediction.” Journal of the American Statistical Association, 91: 109–122.
  • Berger, J. O. and Pericchi, L. R. (2004). “Training Samples in Objective Bayesian Model Selection.” The Annals of Statistics, 32(3): 841–869.
  • Berger, J. O. and Strawderman, W. E. (1996). “Choice of hierarchical priors: Admissibility in estimation of normal means.” Annals of Statistics, 24: 931–951.
  • Böing-Messing, F., van Assen, M. A. L. M., Hofman, A. D., Hoijtink, H., and Mulder, J. (2017). “Bayesian Evaluation of Constrained Hypotheses on Variances of Multiple Independent Groups.” Psychological Methods, 22: 262–287.
  • Böing-Messing, F. and Mulder, J. (2016). “Automatic Bayes factors for testing variances of two independent normal distributions.” Journal of Mathematical Psychology, 72: 158–170.
  • Bradlow, E., Hardie, B., and Fader, P. (2002). “Closed-Form Bayesian Inference for the Negative Binomial Distribution.” Journal of Computational and Graphical Statistics, 11: 189–202.
  • Browne, W. J. and Draper, D. (2006). “A comparison of Bayesian and likelihood-based methods for fitting multilevel models.” Bayesian Analysis, 1: 473–514.
  • Campbell, D. T. and Fiske, D. W. (1959). “Convergent and discriminant validation by the multitrait-multimethod matrix.” Psychological Bulletin, 56: 81–105.
  • Carvalho, C. M., Polson, N. G., and Scott, J. G. (2009). “Sparsity via the horseshoe.” Journal of Machine Learning Research: Workshop and Conference Proceedings, 5: 73–80.
  • Chung, Y., Gelman, A., Rabe-Hesketh, S., Liu, J., and Dorie, V. (2015). “Weakly Informative Prior for Point Estimation of Covariance Matrices in Hierarchical Models.” Journal of Educational and Behavioral Statistics, 40: 136–157.
  • Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. Hillsdale, NJ: Lawrence Erlbaum, second edition.
  • Dawid, A. P. (1981). “Some matrix-variate distribution theory: Notational considerations and a Bayesian application.” Biometrika, 68: 265–274.
  • De Finetti, B. (1961). The Bayesian Approach to the Rejection of Outliers, 199–210. Berkeley, CA: University of California Press.
  • Gelman, A. (2006). “Prior distributions for variance parameters in hierarchical models (comment on article by Browne and Draper).” Bayesian Analysis, 1: 515–534.
  • Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., and Rubin, D. B. (2014). Bayesian Data Analysis. Boca Raton: Chapman & Hall/CRC, third edition.
  • Huang, A. and Wand, M. P. (2013). “Simple Marginally Noninformative Prior Distributions for Covariance Matrices.” Bayesian Analysis, 8: 439–452.
  • Kass, R. E. and Natarajan, R. (2006). “A default conjugate prior for variance components in generalized linear mixed models (comment on article by Browne and Draper).” Bayesian Analysis, 1: 535–542.
  • Kinney, S. and Dunson, D. B. (2007). “Fixed and Random Effects Selection in Linear and Logistic Models.” Biometrics, 63: 690–698.
  • Klugkist, I., Laudy, O., and Hoijtink, H. (2005). “Inequality constrained analysis of variance: A Bayesian approach.” Psychological Methods, 10: 477–493.
  • Liang, F., Paulo, R., Molina, G., Clyde, M. A., and Berger, J. O. (2008). “Mixtures of $g$ priors for Bayesian variable selection.” Journal of the American Statistical Association, 103(481): 410–423.
  • Lievens, F. and Conway, J. M. (2001). “Dimension and exercise variance in assessment center scores: A large-scale evaluation of multitrait-multimethod studies.” Journal of Applied Psychology, 86: 1202–1222.
  • Maruyama, Y. and George, E. (2011). “Fully Bayes factors with a generalized g-prior.” The Annals of Statistics, 39: 2740–2765.
  • Mathai, A. M. (2005). “A pathway to matrix-variate gamma and normal densities.” Linear Algebra and Its Applications, 396: 317–328.
  • Moreno, E., Bertolino, F., and Racugno, W. (1998). “An intrinsic limiting procedure for model selection and hypotheses testing.” Journal of the American Statistical Association, 93: 1451–1460.
  • Moreno, E. and Pericchi, L. (2014). “Intrinsic Priors for Objective Bayesian Model Selection.” Advances in Econometrics, 34: 279–300.
  • Muis, K. R., Winne, P. H., and Jamieson-Noel, D. (2007). “Using a multitrait-multimethod analysis to examine conceptual similarities of three self-regulated learning inventories.” British Journal of Educational Psychology, 77: 177–195.
  • Mulder, J. (2014). “Bayes factors for testing inequality constrained hypotheses: Issues with Prior Specification.” British Journal of Mathematical and Statistical Psychology, 67: 153–171.
  • Mulder, J. (2016). “Bayes factors for testing order-constrained hypotheses on correlations.” Journal of Mathematical Psychology, 72: 104–115.
  • Mulder, J., Hoijtink, H., and Klugkist, I. (2010). “Equality and Inequality Constrained Multivariate Linear Models: Objective Model Selection Using Constrained Posterior Priors.” Journal of Statistical Planning and Inference, 140: 887–906.
  • Mulder, J. and Pericchi, L. R. (2018). “Supplementary material for “The matrix-$F$ prior for estimating and testing covariance matrices”.” Bayesian Analysis.
  • Natarajan, R. and Kass, R. E. (2000). “Reference Bayesian Methods for Generalized Linear Mixed Models.” Journal of the American Statistical Association, 95: 227–237.
  • Olkin, I. and Rubin, H. (1964). “Multivariate Beta Distributions and Independence Properties of the Wishart Distribution.” The Annals of Mathematical Statistics, 35: 261–269.
  • O’Malley, A. and Zaslavsky, A. (2008). “Domain-level covariance analysis for multilevel survey data with structured nonresponse.” Journal of the American Statistical Association, 103: 1405–1418.
  • Pérez, J. M. and Berger, J. O. (2002). “Expected Posterior Prior Distributions for Model Selection.” Biometrika, 89: 491–511.
  • Pérez, M. E., Pericchi, L. R., and Ramirez, I. C. (2017). “The Scaled Beta2 Distribution as a Robust Prior for Scales.” Bayesian Analysis, 12.
  • Pericchi, L. R. (2005). “Model selection and hypothesis testing based on objective probabilities and Bayes factors.” Handbook of Statistics, 25: 115–149.
  • Polson, N. G. and Scott, J. G. (2011). “Shrink globally, act locally: sparse Bayesian regularization and prediction.” In Bayesian Statistics 9. Oxford: Oxford University Press.
  • Polson, N. G. and Scott, J. G. (2012). “On the Half-Cauchy Prior for a Global Scale Parameter.” Bayesian Analysis, 7.
  • Scott, J. G. and Berger, J. O. (2006). “An exploration of aspects of Bayesian multiple testing.” Journal of Statistical Planning and Inference, 136: 2144–2162.
  • Tan, W. Y. (1969). “Note on the multivariate and the generalized multivariate beta distributions.” Journal of the American Statistical Association, 64: 230–241.
  • Wang, M. and Sun, X. (2013). “Bayes Factor Consistency for One-way Random Effects Model.” Statistics: A Journal of Theoretical and Applied Statistics, 47: 1104–1115.
  • Westfall, P. and Gönen, M. (1996). “Asymptotic properties of ANOVA Bayes factors.” Communications in Statistics: Theory and Methods, 25: 3101–3123.
  • Zeger, S. L. and Karim, M. R. (1991). “Generalized Linear Models With Random Effects: A Gibbs Sampling Approach.” Journal of the American Statistical Association, 86: 79–86.

Supplemental materials

  • Supplementary material for “The matrix-F prior for estimating and testing covariance matrices”. The supplement contains a proof that the matrix-F distribution has the reciprocity property (Section 1); a derivation of the means and (co)variances of the elements of a random matrix having a matrix-F distribution (Section 2); the derivation of the intrinsic prior for a precise hypothesis test of a covariance matrix and the resulting intrinsic Bayes factor (Section 3); a proof that the intrinsic Bayes factor is consistent (Section 4); and a derivation of the Bayes factor of an inequality-constrained covariance matrix against an unconstrained covariance matrix (Section 5).