Electronic Journal of Statistics

Exact adaptive confidence intervals for linear regression coefficients

Peter Hoff and Chaoyu Yu



We propose an adaptive confidence interval procedure (CIP) for the coefficients in the normal linear regression model. This procedure has a frequentist coverage rate that is constant as a function of the model parameters, yet provides smaller intervals than the usual interval procedure, on average across regression coefficients. The proposed procedure is obtained by defining a class of CIPs that all have exact $1-\alpha$ frequentist coverage, and then selecting from this class the procedure that minimizes a prior expected interval width. We describe an adaptive approach for estimating the prior distribution from the data, so that the potential risk of a poorly specified prior is reduced. The resulting adaptive confidence intervals maintain exact non-asymptotic $1-\alpha$ coverage if two conditions are met: that the design matrix is full rank (which will be known) and that the errors are normally distributed (which can be checked empirically). No assumptions on the unknown parameters are necessary to maintain exact coverage. Additionally, in a “$p$ growing with $n$” asymptotic scenario, this adaptive FAB (frequentist assisted by Bayes) procedure is asymptotically Bayes-optimal among $1-\alpha$ frequentist CIPs.
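For context, the "usual interval procedure" that the abstract compares against is the standard $t$-interval for each OLS coefficient. A minimal sketch (using NumPy and SciPy; this is the baseline procedure, not the authors' adaptive FAB method, and the function name and simulated data are illustrative only):

```python
import numpy as np
from scipy import stats

def ols_intervals(X, y, alpha=0.05):
    """Usual 1 - alpha t-intervals for each coefficient in the
    normal linear model y = X beta + e, assuming X is full rank.
    Returns a (p, 2) array of (lower, upper) endpoints."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y              # OLS estimate
    resid = y - X @ beta_hat
    s2 = resid @ resid / (n - p)              # unbiased error-variance estimate
    se = np.sqrt(s2 * np.diag(XtX_inv))       # standard errors of beta_hat
    t = stats.t.ppf(1 - alpha / 2, df=n - p)  # t critical value
    return np.column_stack([beta_hat - t * se, beta_hat + t * se])

# Illustrative data: 50 observations, 3 coefficients
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, 0.0, -2.0]) + rng.standard_normal(50)
ci = ols_intervals(X, y)
```

Each interval here has exact $1-\alpha$ coverage under normal errors but fixed width given the design; the paper's FAB procedure keeps the same exact coverage while shrinking the average width by exploiting an estimated prior.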

Article information

Electron. J. Statist., Volume 13, Number 1 (2019), 94–119.

Received: June 2018
First available in Project Euclid: 4 January 2019


Primary: 62J05: Linear regression

Keywords: Empirical Bayes; frequentist coverage; ridge regression; shrinkage; sparsity

Creative Commons Attribution 4.0 International License.


Hoff, Peter; Yu, Chaoyu. Exact adaptive confidence intervals for linear regression coefficients. Electron. J. Statist. 13 (2019), no. 1, 94--119. doi:10.1214/18-EJS1517. https://projecteuclid.org/euclid.ejs/1546570943



  • Andrews, D. W. K. (1992). Generic uniform convergence. Econometric Theory 8(2), 241–257.
  • Bartholomew, D. J. (1971). A comparison of frequentist and Bayesian approaches to inference with prior knowledge. pp. 417–434. With comments by G. A. Barnard, R. J. Buehler, D. R. Cox, V. P. Godambe, I. J. Good, W. J. Hall, J. A. Hartigan, O. Kempthorne and J. W. Pratt, and a reply by the author.
  • Bühlmann, P. (2013). Statistical significance in high-dimensional linear models. Bernoulli 19(4), 1212–1242.
  • Carlin, B. P. and A. E. Gelfand (1990). Approaches for empirical Bayes confidence intervals. J. Amer. Statist. Assoc. 85(409), 105–114.
  • Conlon, E. M., X. S. Liu, J. D. Lieb, and J. S. Liu (2003). Integrating regulatory motif discovery and genome-wide expression analysis. Proceedings of the National Academy of Sciences 100(6), 3339–3344.
  • Efron, B., T. Hastie, I. Johnstone, and R. Tibshirani (2004). Least angle regression. Ann. Statist. 32(2), 407–499. With discussion and a rejoinder by the authors.
  • Farchione, D. and P. Kabaila (2008). Confidence intervals for the normal mean utilizing prior information. Statist. Probab. Lett. 78(9), 1094–1100.
  • Kabaila, P. and K. Giri (2009). Confidence intervals in regression utilizing prior information. J. Statist. Plann. Inference 139(10), 3419–3429.
  • Kabaila, P. and D. Tissera (2014). Confidence intervals in regression that utilize uncertain prior information about a vector parameter. Australian & New Zealand Journal of Statistics 56(4), 371–383.
  • LeCam, L. (1953). On some asymptotic properties of maximum likelihood estimates and related Bayes’ estimates. Univ. California Publ. Statist. 1, 277–329.
  • Lee, J. D., D. L. Sun, Y. Sun, and J. E. Taylor (2016). Exact post-selection inference, with application to the lasso. Ann. Statist. 44(3), 907–927.
  • Leeb, H. and B. M. Pötscher (2008). Sparse estimators and the oracle property, or the return of Hodges’ estimator. J. Econometrics 142(1), 201–211.
  • Meinshausen, N., L. Meier, and P. Bühlmann (2009). $p$-values for high-dimensional regression. J. Amer. Statist. Assoc. 104(488), 1671–1681.
  • Mitchell, T. J. and J. J. Beauchamp (1988). Bayesian variable selection in linear regression. J. Amer. Statist. Assoc. 83(404), 1023–1036. With comments by James Berger and C. L. Mallows and with a reply by the authors.
  • Newey, W. K. and D. McFadden (1994). Large sample estimation and hypothesis testing. In Handbook of Econometrics, Vol. IV, Volume 2 of Handbooks in Econom., pp. 2111–2245. North-Holland, Amsterdam.
  • O’Gorman, T. W. (2001). Using adaptive weighted least squares to reduce the lengths of confidence intervals. Canad. J. Statist. 29(3), 459–471.
  • Pratt, J. W. (1963). Shorter confidence intervals for the mean of a normal distribution with known variance. The Annals of Mathematical Statistics 34(2), 574–586.
  • Puza, B. and T. O’Neill (2006). Interval estimation via tail functions. Canadian Journal of Statistics 34(2), 299–310.
  • Ranga Rao, R. (1962). Relations between weak and uniform convergence of measures with applications. Ann. Math. Statist. 33, 659–680.
  • Stein, C. M. (1962). Confidence sets for the mean of a multivariate normal distribution. J. Roy. Statist. Soc. Ser. B 24, 265–296.
  • van de Geer, S., P. Bühlmann, Y. Ritov, and R. Dezeure (2014). On asymptotically optimal confidence regions and tests for high-dimensional models. Ann. Statist. 42(3), 1166–1202.
  • Yoshimori, M. and P. Lahiri (2014). A second-order efficient empirical Bayes confidence interval. Ann. Statist. 42(4), 1–29.
  • Yu, C. and P. D. Hoff (2018). Adaptive multigroup confidence intervals with constant coverage. Biometrika 105(2), 319–335.
  • Zhang, C.-H. and S. S. Zhang (2014). Confidence intervals for low dimensional parameters in high dimensional linear models. J. R. Stat. Soc. Ser. B. Stat. Methodol. 76(1), 217–242.