Nonpenalized variable selection in high-dimensional linear model settings via generalized fiducial inference
Jonathan P. Williams, Jan Hannig
Ann. Statist. 47(3): 1723-1753 (June 2019). DOI: 10.1214/18-AOS1733


Standard penalized methods of variable selection and parameter estimation rely on the magnitude of coefficient estimates to decide which variables to include in the final model. However, coefficient estimates are unreliable when the design matrix is collinear. To overcome this challenge, an entirely new perspective on variable selection is presented within a generalized fiducial inference framework. The procedure effectively accounts for linear dependencies among subsets of covariates, both in a high-dimensional setting where $p$ can grow almost exponentially in $n$ and in the classical setting where $p\le n$. It is shown that the procedure naturally assigns small probabilities to subsets of covariates that include redundancies, by way of explicit $L_{0}$ minimization. Furthermore, under a typical sparsity assumption, the proposed method is shown to be consistent in the sense that the probability assigned to the true sparse subset of covariates converges in probability to 1 as $n\to\infty$, or as $n\to\infty$ and $p\to\infty$. Only mild conditions are needed, and little restriction is placed on the class of candidate subsets of covariates to achieve this consistency result.
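The abstract's central mechanism — scoring each candidate subset of covariates and letting subsets containing redundancies receive small probability, rather than thresholding coefficient magnitudes — can be illustrated with a toy sketch. The BIC-style score below is a hedged stand-in for the paper's generalized fiducial probability (the actual fiducial formula is not reproduced here), and all data, sizes, and variable names are hypothetical.

```python
# Illustrative sketch only: enumerate candidate covariate subsets and assign
# each a normalized pseudo-probability. The BIC-style score is a stand-in for
# the paper's generalized fiducial probability, NOT the paper's formula.
from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 4
X = rng.standard_normal((n, p))
X[:, 3] = X[:, 0] + X[:, 1]              # column 3 is an exact linear redundancy
beta = np.array([2.0, -1.0, 0.0, 0.0])   # true sparse model uses covariates {0, 1}
y = X @ beta + rng.standard_normal(n)

def rss(subset):
    """Residual sum of squares of the least squares fit on the given columns."""
    if not subset:
        return float(y @ y)
    Xs = X[:, list(subset)]
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    r = y - Xs @ coef
    return float(r @ r)

# BIC-style score for every subset; smaller is better. The log(n) * |subset|
# term plays the role of the implicit L0 (model-size) penalty.
scores = {}
for k in range(p + 1):
    for s in combinations(range(p), k):
        scores[s] = n * np.log(rss(s) / n) + len(s) * np.log(n)

# Normalize scores into pseudo-probabilities over the model space.
vals = np.array(list(scores.values()))
w = np.exp(-0.5 * (vals - vals.min()))
probs = {s: wi / w.sum() for s, wi in zip(scores, w)}

# The redundant superset {0, 1, 3} fits no better than {0, 1} but is larger,
# so it receives a much smaller probability.
print(probs[(0, 1)], probs[(0, 1, 3)])
```

Note that with an exactly collinear column, the subsets {0, 1}, {0, 3} and {1, 3} fit the data equally well and therefore tie in score — no criterion can distinguish observationally equivalent models — but any subset that contains the redundancy on top of an adequate model is penalized purely through its size, which is the behavior the abstract describes.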




Received: 1 February 2018; Published: June 2019
First available in Project Euclid: 13 February 2019

zbMATH: 07053524
MathSciNet: MR3911128
Digital Object Identifier: 10.1214/18-AOS1733

Primary: 62A01, 62F12, 62J05

Rights: Copyright © 2019 Institute of Mathematical Statistics


