Electronic Journal of Statistics

A note on the approximate admissibility of regularized estimators in the Gaussian sequence model

Xi Chen, Adityanand Guntuboyina, and Yuchen Zhang



We study the problem of estimating an unknown vector $\theta $ from an observation $X$ drawn according to the normal distribution with mean $\theta $ and identity covariance matrix under the knowledge that $\theta $ belongs to a known closed convex set $\Theta $. In this general setting, Chatterjee (2014) proved that the natural constrained least squares estimator is “approximately admissible” for every $\Theta $. We extend this result by proving that the same property holds for all convex penalized estimators as well. Moreover, we simplify and shorten the original proof considerably. We also provide explicit upper and lower bounds for the universal constant underlying the notion of approximate admissibility.
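As a concrete illustration of the setting described in the abstract (a hedged sketch, not code from the paper): the constrained least squares estimator is the Euclidean projection of the observation $X$ onto $\Theta$, and a convex penalized estimator minimizes squared error plus a convex penalty. The choices below — an $\ell_2$-ball constraint, an $\ell_1$ penalty with a universal-threshold-style tuning, and the sparse truth — are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian sequence model: X ~ N(theta, I_n), theta unknown.
n = 50
theta = np.concatenate([np.ones(5), np.zeros(n - 5)])  # illustrative sparse truth
X = theta + rng.standard_normal(n)

# Constrained least squares: project X onto a closed convex set Theta,
# here taken (as an example) to be the Euclidean ball of radius r.
def project_l2_ball(x, r):
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

theta_cls = project_l2_ball(X, r=np.sqrt(5.0))

# Convex penalized least squares with an l1 penalty:
#   argmin_t 0.5 * ||X - t||^2 + lam * ||t||_1
# has the closed-form soft-thresholding solution.
def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

theta_pen = soft_threshold(X, lam=np.sqrt(2 * np.log(n)))

# Squared-error losses on this single draw (raw MLE vs. penalized).
loss_mle = np.sum((X - theta) ** 2)
loss_pen = np.sum((theta_pen - theta) ** 2)
print(loss_mle, loss_pen)
```

Both estimators are instances of the convex penalized estimators covered by the paper's result (the constraint corresponds to the penalty that is $0$ on $\Theta$ and $+\infty$ off it).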

Article information

Electron. J. Statist., Volume 11, Number 2 (2017), 4746-4768.

Received: March 2017
First available in Project Euclid: 24 November 2017


Keywords: admissibility; Bayes risk; Gaussian sequence model; least squares estimator; minimaxity

Creative Commons Attribution 4.0 International License.


Chen, Xi; Guntuboyina, Adityanand; Zhang, Yuchen. A note on the approximate admissibility of regularized estimators in the Gaussian sequence model. Electron. J. Statist. 11 (2017), no. 2, 4746--4768. doi:10.1214/17-EJS1354. https://projecteuclid.org/euclid.ejs/1511492461



  • Bellec, P. C. (2017). Optimistic lower bounds for convex regularized least-squares. arXiv preprint arXiv:1703.01332.
  • Bühlmann, P. and van de Geer, S. (2011). Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer Science & Business Media.
  • Chatterjee, S. (2014). A new perspective on least squares under convex constraint. The Annals of Statistics 42 2340–2381.
  • Chen, X., Guntuboyina, A. and Zhang, Y. (2016). On Bayes risk lower bounds. Journal of Machine Learning Research 17 1–58.
  • Groeneboom, P. and Jongbloed, G. (2014). Nonparametric Estimation under Shape Constraints: Estimators, Algorithms and Asymptotics 38. Cambridge University Press.
  • Le Cam, L. (1973). Convergence of estimates under dimensionality restrictions. Annals of Statistics 1 38–53.
  • Lehmann, E. L. and Casella, G. (1998). Theory of Point Estimation, 2nd ed. Springer, New York.
  • Muro, A. and van de Geer, S. (2015). Concentration behavior of the penalized least squares estimator. arXiv preprint arXiv:1511.08698.
  • Tsybakov, A. (2009). Introduction to Nonparametric Estimation. Springer-Verlag.
  • van de Geer, S. and Wainwright, M. (2015). On concentration for (regularized) empirical risk minimization. arXiv preprint arXiv:1512.00677.
  • Woodroofe, M. and Sun, J. (1993). A penalized maximum likelihood estimate of f(0+) when f is non-increasing. Statistica Sinica 501–515.
  • Zhang, L. (2013). Nearly optimal minimax estimator for high-dimensional sparse linear regression. The Annals of Statistics 41 2149–2175.