Abstract
Recently, Tibshirani et al. [J. Amer. Statist. Assoc. 111 (2016) 600–620] proposed a method for making inferences about parameters defined by model selection, in a typical regression setting with normally distributed errors. Here, we study the large sample properties of this method, without assuming normality. We prove that the test statistic of Tibshirani et al. (2016) is asymptotically valid, as the number of samples $n$ grows and the dimension $d$ of the regression problem stays fixed. Our asymptotic result holds uniformly over a wide class of nonnormal error distributions. We also propose an efficient bootstrap version of this test that is provably (asymptotically) conservative, and in practice, often delivers shorter intervals than those from the original normality-based approach. Finally, we prove that the test statistic of Tibshirani et al. (2016) does not enjoy uniform validity in a high-dimensional setting, when the dimension $d$ is allowed to grow.
Citation
Ryan J. Tibshirani, Alessandro Rinaldo, Rob Tibshirani, and Larry Wasserman. "Uniform asymptotic inference and the bootstrap after model selection." Ann. Statist. 46 (3) 1255–1287, June 2018. https://doi.org/10.1214/17-AOS1584