Electronic Journal of Statistics

The horseshoe estimator: Posterior concentration around nearly black vectors

S. L. van der Pas, B. J. K. Kleijn, and A. W. van der Vaart

Full-text: Open access


We consider the horseshoe estimator due to Carvalho, Polson and Scott (2010) for the multivariate normal mean model in the situation that the mean vector is sparse in the nearly black sense. We work in the frequentist framework, in which the data are generated according to a fixed mean vector. We show that if the number of nonzero parameters of the mean vector is known, the horseshoe estimator attains the minimax $\ell_{2}$ risk, possibly up to a multiplicative constant. We provide conditions under which the horseshoe estimator combined with an empirical Bayes estimate of the number of nonzero means still yields the minimax risk. We furthermore prove an upper bound on the rate of contraction of the posterior distribution around the horseshoe estimator, and a lower bound on the posterior variance. These bounds indicate that the posterior distribution of the horseshoe prior may be more informative than that of other one-component priors, including the Lasso.
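In the normal means model $y_i = \theta_i + \varepsilon_i$ with $\theta_i \mid \lambda_i, \tau \sim N(0, \lambda_i^2\tau^2)$ and $\lambda_i \sim C^{+}(0,1)$, the horseshoe posterior mean can be written as $E[\theta_i \mid y_i] = (1 - E[\kappa_i \mid y_i])\,y_i$, where $\kappa_i = 1/(1+\tau^2\lambda_i^2)$ is the shrinkage weight. The following is a minimal numerical sketch of this estimator, not the authors' implementation; the function name and grid size are illustrative, and the quadrature uses the substitution $\kappa = \sin^2 u$ to remove the endpoint singularities of the prior on $\kappa$.

```python
import numpy as np

def horseshoe_posterior_mean(y, tau=1.0, n_grid=4000):
    """Posterior mean E[theta | y] under the horseshoe prior for a single
    observation y ~ N(theta, 1), theta | lambda ~ N(0, lambda^2 tau^2),
    lambda ~ C+(0, 1).

    Uses E[theta | y] = (1 - E[kappa | y]) * y, where the prior on the
    shrinkage weight kappa = 1 / (1 + tau^2 lambda^2) is proportional to
    kappa^{-1/2} (1 - kappa)^{-1/2} / (1 + (tau^2 - 1) kappa).
    """
    # Substitute kappa = sin^2(u): the factor kappa^{-1/2}(1-kappa)^{-1/2} dkappa
    # becomes a constant multiple of du, which cancels in the posterior ratio.
    u = np.linspace(1e-6, np.pi / 2 - 1e-6, n_grid)
    kappa = np.sin(u) ** 2
    # Marginal likelihood of y given kappa is N(0, 1/kappa), i.e. proportional
    # to sqrt(kappa) * exp(-kappa * y^2 / 2); multiply by the remaining prior factor.
    w = np.sqrt(kappa) * np.exp(-kappa * y ** 2 / 2) / (1 + (tau ** 2 - 1) * kappa)
    # Uniform grid in u, so the grid spacing cancels in the ratio of sums.
    e_kappa = np.sum(kappa * w) / np.sum(w)
    return (1.0 - e_kappa) * y

# Small observations are shrunk nearly to zero, while large observations are
# left almost untouched -- the tail robustness that distinguishes the horseshoe
# from one-component priors such as the Lasso.
print(horseshoe_posterior_mean(0.5, tau=0.5))
print(horseshoe_posterior_mean(10.0, tau=0.5))
```

Sweeping `tau` toward zero strengthens the shrinkage of small observations, which is the mechanism behind the role of the global parameter in the minimax results above.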

Article information

Electron. J. Statist., Volume 8, Number 2 (2014), 2585-2618.

First available in Project Euclid: 9 December 2014


Primary: 62F15 (Bayesian inference); 62F10 (Point estimation)

Keywords: sparsity; horseshoe prior; worst-case risk; Bayesian inference; empirical Bayes; posterior contraction; normal means model


van der Pas, S. L.; Kleijn, B. J. K.; van der Vaart, A. W. The horseshoe estimator: Posterior concentration around nearly black vectors. Electron. J. Statist. 8 (2014), no. 2, 2585--2618. doi:10.1214/14-EJS962. https://projecteuclid.org/euclid.ejs/1418134265



  • Armagan, A., Dunson, D. B. and Lee, J. (2013). Generalized Double Pareto Shrinkage. Statistica Sinica 23 119–143.
  • Bhattacharya, A., Pati, D., Pillai, N. S. and Dunson, D. B. (2012). Bayesian Shrinkage. arXiv:1212.6088.
  • Bickel, P. J., Ritov, Y. and Tsybakov, A. B. (2009). Simultaneous Analysis of Lasso and Dantzig Selector. The Annals of Statistics 37 1705–1732.
  • Bogdan, M., Ghosh, J. K. and Tokdar, S. T. (2008). A Comparison of the Benjamini-Hochberg Procedure with Some Bayesian Rules for Multiple Testing. In Beyond Parametrics in Interdisciplinary Research: Festschrift in Honor of Professor Pranab K. Sen. The Institute of Mathematical Statistics.
  • Carvalho, C. M., Polson, N. G. and Scott, J. G. (2009). Handling Sparsity via the Horseshoe. Journal of Machine Learning Research, W&CP 5 73–80.
  • Carvalho, C. M., Polson, N. G. and Scott, J. G. (2010). The Horseshoe Estimator for Sparse Signals. Biometrika 97 465–480.
  • Castillo, I., Schmidt-Hieber, J. and Van der Vaart, A. W. (2014). Bayesian Linear Regression with Sparse Priors. arXiv:1403.0735.
  • Castillo, I. and Van der Vaart, A. W. (2012). Needles and Straw in a Haystack: Posterior Concentration for Possibly Sparse Sequences. The Annals of Statistics 40 2069–2101.
  • Chernoff, H. (1952). A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations. The Annals of Mathematical Statistics 23 493–507.
  • Datta, J. and Ghosh, J. K. (2013). Asymptotic Properties of Bayes Risk for the Horseshoe Prior. Bayesian Analysis 8 111–132.
  • Donoho, D. L., Johnstone, I. M., Hoch, J. C. and Stern, A. S. (1992). Maximum Entropy and the Nearly Black Object (with discussion). Journal of the Royal Statistical Society. Series B (Methodological) 54 41–81.
  • Efron, B. (2008). Microarrays, Empirical Bayes and the Two-Groups Model. Statistical Science 23 1–22.
  • Ghosal, S., Ghosh, J. K. and Van der Vaart, A. W. (2000). Convergence Rates of Posterior Distributions. The Annals of Statistics 28 500–531.
  • Gradshteyn, I. S. and Ryzhik, I. M. (1965). Table of Integrals, Series and Products. Academic Press.
  • Griffin, J. E. and Brown, P. J. (2010). Inference with Normal-Gamma Prior Distributions in Regression Problems. Bayesian Analysis 5 171–188.
  • Jiang, W. and Zhang, C.-H. (2009). General Maximum Likelihood Empirical Bayes Estimation of Normal Means. The Annals of Statistics 37 1647–1684.
  • Johnstone, I. M. and Silverman, B. W. (2004). Needles and Straw in Haystacks: Empirical Bayes Estimates of Possibly Sparse Sequences. The Annals of Statistics 32 1594–1649.
  • Koenker, R. (2014). A Gaussian Compound Decision Bakeoff. Stat. 3 12–16.
  • Koenker, R. and Mizera, I. (2014). Convex Optimization, Shape Constraints, Compound Decisions and Empirical Bayes Rules. Journal of the American Statistical Association 109 674–685.
  • Martin, R. and Walker, S. G. (2014). Asymptotically Minimax Empirical Bayes Estimation of a Sparse Normal Mean Vector. Electronic Journal of Statistics 8 2188–2206.
  • Miller, P. D. (2006). Applied Asymptotic Analysis. Graduate Studies in Mathematics 75. The American Mathematical Society.
  • Mitchell, T. J. and Beauchamp, J. J. (1988). Bayesian Variable Selection in Linear Regression. Journal of the American Statistical Association 83 1023–1032.
  • Pericchi, L. R. and Smith, A. F. M. (1992). Exact and Approximate Posterior Moments for a Normal Location Parameter. Journal of the Royal Statistical Society. Series B (Methodological) 54 793–804.
  • Polson, N. G. and Scott, J. G. (2010). Shrink Globally, Act Locally: Sparse Bayesian Regularization and Prediction. In Bayesian Statistics 9 (J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West, eds.) Oxford University Press.
  • Polson, N. G. and Scott, J. G. (2012a). Good, Great or Lucky? Screening for Firms with Sustained Superior Performance Using Heavy-Tailed Priors. The Annals of Applied Statistics 6 161–185.
  • Polson, N. G. and Scott, J. G. (2012b). On the Half-Cauchy Prior for a Global Scale Parameter. Bayesian Analysis 7 887–902.
  • Scott, J. G. (2010). Parameter Expansion in Local-Shrinkage Models. arXiv:1010.5265.
  • Scott, J. G. (2011). Bayesian Estimation of Intensity Surfaces on the Sphere via Needlet Shrinkage and Selection. Bayesian Analysis 6 307–328.
  • Scott, J. G. and Berger, J. O. (2010). Bayes and Empirical-Bayes Multiplicity Adjustment in the Variable-Selection Problem. The Annals of Statistics 38 2587–2619.
  • Shafer, R. E. (1966). Elementary Problems. Problem E 1867. The American Mathematical Monthly 73 309.
  • Tibshirani, R. (1996). Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society. Series B (Methodological) 58 267–288.
  • Yuan, M. and Lin, Y. (2005). Efficient Empirical Bayes Variable Selection and Estimation in Linear Models. Journal of the American Statistical Association 100 1215–1225.