Open Access
Uniform asymptotic inference and the bootstrap after model selection
Ryan J. Tibshirani, Alessandro Rinaldo, Rob Tibshirani, Larry Wasserman
Ann. Statist. 46(3): 1255-1287 (June 2018). DOI: 10.1214/17-AOS1584

Abstract

Recently, Tibshirani et al. [J. Amer. Statist. Assoc. 111 (2016) 600–620] proposed a method for making inferences about parameters defined by model selection, in a typical regression setting with normally distributed errors. Here, we study the large sample properties of this method, without assuming normality. We prove that the test statistic of Tibshirani et al. (2016) is asymptotically valid, as the number of samples $n$ grows and the dimension $d$ of the regression problem stays fixed. Our asymptotic result holds uniformly over a wide class of nonnormal error distributions. We also propose an efficient bootstrap version of this test that is provably (asymptotically) conservative and, in practice, often delivers shorter intervals than those from the original normality-based approach. Finally, we prove that the test statistic of Tibshirani et al. (2016) does not enjoy uniform validity in a high-dimensional setting, when the dimension $d$ is allowed to grow.
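To make the setting concrete, the sketch below is a generic illustration (not the authors' procedure) of why inference after model selection is delicate: it performs one forward-stepwise step, then forms a naive pairs-bootstrap interval for the selected coefficient by redoing selection on each resample. All data, sample sizes, and the single-step selection rule here are illustrative assumptions; the paper studies tests that explicitly account for the selection event.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulated data: n samples, d fixed predictors (assumed setup).
n, d = 200, 5
X = rng.standard_normal((n, d))
y = 2.0 * X[:, 0] + rng.standard_normal(n)  # signal only in the first predictor

def select_and_fit(X, y):
    """One forward-stepwise step: pick the predictor most correlated with y,
    then return (selected index, its least-squares coefficient)."""
    j = int(np.argmax(np.abs(X.T @ y)))
    beta = np.linalg.lstsq(X[:, [j]], y, rcond=None)[0][0]
    return j, beta

j_hat, beta_hat = select_and_fit(X, y)

# Naive pairs bootstrap: resample rows and redo selection + fitting each time.
# This interval ignores the selection event, which is exactly the difficulty
# that selective-inference methods are designed to address.
B = 1000
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    _, boot[b] = select_and_fit(X[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"selected predictor {j_hat}, coefficient {beta_hat:.2f}, "
      f"naive 95% bootstrap interval [{lo:.2f}, {hi:.2f}]")
```

With a strong signal, the same predictor is selected in nearly every resample and the naive interval behaves well; the hard regime, which motivates the uniformity analysis in the paper, is when signals are weak and the selected model varies across resamples.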

Citation


Ryan J. Tibshirani, Alessandro Rinaldo, Rob Tibshirani, Larry Wasserman. "Uniform asymptotic inference and the bootstrap after model selection." Ann. Statist. 46 (3): 1255–1287, June 2018. https://doi.org/10.1214/17-AOS1584

Information

Received: 1 July 2016; Revised: 1 March 2017; Published: June 2018
First available in Project Euclid: 3 May 2018

zbMATH: 1392.62210
MathSciNet: MR3798003
Digital Object Identifier: 10.1214/17-AOS1584

Subjects:
Primary: 62F05, 62F35, 62J05, 62J07

Keywords: asymptotics, bootstrap, forward stepwise regression, Lasso, post-selection inference, selective inference

Rights: Copyright © 2018 Institute of Mathematical Statistics
