Bayesian Analysis

Asymptotic Properties of Bayes Risk for the Horseshoe Prior

Jyotishka Datta and Jayanta K. Ghosh

Full-text: Open access


In this paper, we establish some optimality properties of the multiple testing rule induced by the horseshoe estimator due to Carvalho, Polson, and Scott (2009, 2010) from a Bayesian decision-theoretic viewpoint. We consider the two-groups model for the data and an additive loss structure such that the total loss equals the number of misclassified hypotheses. We use the same asymptotic framework as Bogdan, Chakrabarti, Frommlet, and Ghosh (2011), who introduced the Bayes oracle in the context of multiple testing and provided conditions under which the Benjamini-Hochberg and Bonferroni procedures attain the risk of the Bayes oracle. We prove a similar result for the horseshoe decision rule up to O(1), with the constant in the horseshoe risk close to the constant in the oracle risk. We use the full Bayes estimate of the tuning parameter τ. It is worth noting that the full Bayes estimate cannot be replaced by the empirical Bayes estimate, which tends to be too small.
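As an illustrative aside, the setup described above can be sketched numerically. The following is a minimal, hypothetical Python sketch (not the authors' code): it simulates the two-groups model, applies the Bayes oracle under misclassification loss (flag a signal when the posterior odds of the non-null group exceed one), and applies the horseshoe decision rule of flagging a signal when the posterior shrinkage weight E[1 − κ | x] exceeds 1/2, computed by simple quadrature. For illustration τ is simply fixed at p rather than given the full Bayes treatment used in the paper.

```python
import math
import random

def normal_pdf(x, var):
    """Density of N(0, var) at x."""
    return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def oracle_reject(x, p, psi2):
    """Bayes oracle for the two-groups model under additive misclassification
    loss: flag a signal when the posterior odds of the non-null group exceed 1."""
    return p * normal_pdf(x, 1.0 + psi2) > (1.0 - p) * normal_pdf(x, 1.0)

def horseshoe_shrinkage_weight(x, tau, n_grid=2000):
    """Posterior mean of 1 - kappa = lam^2 tau^2 / (1 + lam^2 tau^2) under
    x | lam ~ N(0, 1 + lam^2 tau^2), lam ~ half-Cauchy(0, 1), by midpoint
    quadrature with the substitution lam = tan(u), u in (0, pi/2).  The prior
    density times the Jacobian d(lam)/du reduces to the constant 2/pi, which
    cancels in the ratio below."""
    num = den = 0.0
    for i in range(n_grid):
        u = (i + 0.5) * (math.pi / 2.0) / n_grid
        lam = math.tan(u)
        v = lam * lam * tau * tau      # prior variance of theta given lam
        m = normal_pdf(x, 1.0 + v)     # marginal likelihood given lam
        w = v / (1.0 + v)              # shrinkage weight 1 - kappa
        num += w * m
        den += m
    return num / den

def horseshoe_reject(x, tau):
    """Horseshoe decision rule: flag a signal when the posterior shrinkage
    weight exceeds 1/2, i.e. when the posterior mean exceeds x / 2."""
    return horseshoe_shrinkage_weight(x, tau) > 0.5

# Toy comparison on simulated two-groups data.
random.seed(1)
p, psi2, n = 0.05, 25.0, 500
truth = [random.random() < p for _ in range(n)]
data = [random.gauss(0.0, math.sqrt(1.0 + psi2)) if t else random.gauss(0.0, 1.0)
        for t in truth]
oracle_loss = sum(oracle_reject(x, p, psi2) != t for x, t in zip(data, truth))
hs_loss = sum(horseshoe_reject(x, p) != t for x, t in zip(data, truth))
print("oracle misclassifications:", oracle_loss)
print("horseshoe misclassifications:", hs_loss)
```

Under this kind of sparse, strong-signal regime the total misclassification loss of the horseshoe rule tracks that of the oracle, which is the qualitative phenomenon the paper makes precise asymptotically.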

Article information

Bayesian Anal., Volume 8, Number 1 (2013), 111-132.

First available in Project Euclid: 4 March 2013



Keywords: Multiple Testing; Horseshoe Decision Rule; Asymptotic Optimality; Bayes Oracle


Datta, Jyotishka; Ghosh, Jayanta K. Asymptotic Properties of Bayes Risk for the Horseshoe Prior. Bayesian Anal. 8 (2013), no. 1, 111–132. doi:10.1214/13-BA805.



  • Abramovich, F., Benjamini, Y., Donoho, D., and Johnstone, I. (2006). “Adapting to unknown sparsity by controlling the false discovery rate.” The Annals of Statistics, 34(2): 584–653.
  • Armagan, A., Dunson, D., Lee, J., and Bajwa, W. (2011). “Posterior consistency in linear models under shrinkage priors.” arXiv preprint arXiv:1104.4135.
  • Benjamini, Y. and Hochberg, Y. (1995). “Controlling the false discovery rate: a practical and powerful approach to multiple testing.” Journal of the Royal Statistical Society. Series B (Methodological), 57(1): 289–300.
  • Bogdan, M., Chakrabarti, A., Frommlet, F., and Ghosh, J. K. (2011). “Asymptotic Bayes-optimality under sparsity of some multiple testing procedures.” The Annals of Statistics, 39(3): 1551–1579.
  • Bogdan, M., Ghosh, J. K., and Tokdar, S. T. (2008). “A comparison of the Benjamini-Hochberg procedure with some Bayesian rules for multiple testing.” In Beyond Parametrics in Interdisciplinary Research: Festschrift in Honor of Professor Pranab K. Sen, volume 1 of Institute of Mathematical Statistics Collections, 211–230.
  • Carvalho, C., Polson, N., and Scott, J. (2009). “Handling sparsity via the horseshoe.” Journal of Machine Learning Research W&CP, 5: 73–80.
  • — (2010). “The horseshoe estimator for sparse signals.” Biometrika, 97(2): 465–480.
  • Diaconis, P., Goel, S., and Holmes, S. (2008). “Horseshoes in multidimensional scaling and local kernel methods.” The Annals of Applied Statistics, 2(3): 777–807.
  • Efron, B. (2004). “Large-scale simultaneous hypothesis testing.” Journal of the American Statistical Association, 99(465): 96–104.
  • — (2008). “Microarrays, empirical Bayes and the two-groups model.” Statistical Science, 23(1): 1–22.
  • Genovese, C. and Wasserman, L. (2004). “A stochastic process approach to false discovery control.” The Annals of Statistics, 32(3): 1035–1061.
  • Hans, C. (2009). “Bayesian lasso regression.” Biometrika, 96(4): 835–845.
  • Li, B. and Goel, P. K. (2006). “Regularized optimization in statistical learning: a Bayesian perspective.” Statistica Sinica, 16(2): 411–424.
  • Park, T. and Casella, G. (2008). “The Bayesian lasso.” Journal of the American Statistical Association, 103(482): 681–686.
  • Pati, D., Bhattacharya, A., Pillai, N., and Dunson, D. (2012). “Posterior contraction in sparse Bayesian factor models for massive covariance matrices.” arXiv preprint arXiv:1206.3627.
  • Pericchi, L. R. and Smith, A. F. M. (1992). “Exact and approximate posterior moments for a normal location parameter.” Journal of the Royal Statistical Society. Series B (Methodological), 54(3): 793–804.
  • Polson, N. and Scott, J. (2010). “Large-scale simultaneous testing with hypergeometric inverted-beta priors.” arXiv preprint arXiv:1010.5223.
  • Polson, N. G. and Scott, J. G. (2012). “On the half-Cauchy prior for a global scale parameter.” Bayesian Analysis, 7(2): 1–16.
  • Sarkar, S. (2006). “False discovery and false nondiscovery rates in single-step multiple testing procedures.” The Annals of Statistics, 34(1): 394–415.
  • Scott, J. and Berger, J. (2006). “An exploration of aspects of Bayesian multiple testing.” Journal of Statistical Planning and Inference, 136(7): 2144–2162.
  • — (2010). “Bayes and empirical-Bayes multiplicity adjustment in the variable-selection problem.” The Annals of Statistics, 38(5): 2587–2619.
  • Scott, J. G. (2011). “Bayesian estimation of intensity surfaces on the sphere via needlet shrinkage and selection.” Bayesian Analysis, 6(2): 307–327.
  • Sen, B., Banerjee, M., and Woodroofe, M. (2010). “Inconsistency of bootstrap: The Grenander estimator.” The Annals of Statistics, 38(4): 1953–1977.
  • Storey, J. (2007). “The optimal discovery procedure: a new approach to simultaneous significance testing.” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(3): 347–368.
  • Storey, J. D. (2003). “The positive false discovery rate: a Bayesian interpretation and the q-value.” The Annals of Statistics, 31(6): 2013–2035.
  • Strawn, N., Armagan, A., Saab, R., Carin, L., and Dunson, D. (2012). “Finite sample posterior concentration in high-dimensional regression.” arXiv preprint arXiv:1207.4854.
  • Tibshirani, R. (1996). “Regression shrinkage and selection via the lasso.” Journal of the Royal Statistical Society. Series B (Methodological), 58(1): 267–288.