Abstract
This paper studies schemes to de-bias the Lasso in sparse linear regression with Gaussian design where the goal is to estimate and construct confidence intervals for a low-dimensional projection of the unknown coefficient vector $\beta$ in a preconceived direction $a_0$. Our analysis reveals that previously analyzed propositions to de-bias the Lasso require a modification in order to enjoy nominal coverage and asymptotic efficiency in a full range of the level of sparsity. This modification takes the form of a degrees-of-freedom adjustment that accounts for the dimension of the model selected by the Lasso. The degrees-of-freedom adjustment (a) preserves the success of de-biasing methodologies in regimes where previous proposals were successful, and (b) repairs the nominal coverage and provides efficiency in regimes where previous proposals produce spurious inferences and provably fail to achieve the nominal coverage. Hence our theoretical and simulation results call for the implementation of this degrees-of-freedom adjustment in de-biasing methodologies.
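For orientation, one schematic way to write the adjustment (a simplified sketch for the known-$\Sigma$ case, with score vector $z_0 = X\Sigma^{-1}a_0$ and Lasso estimate $\hat\beta$; this is an illustrative form, not the paper's exact construction of the score): the unadjusted de-biased estimate corrects the plug-in $a_0^\top\hat\beta$ by a score-weighted residual term divided by $n$, whereas the adjusted estimate divides by $n$ minus the size of the model selected by the Lasso,
\[
\hat\theta^{\mathrm{unadj}} = a_0^\top\hat\beta + \frac{z_0^\top(y - X\hat\beta)}{n},
\qquad
\hat\theta^{\mathrm{dof}} = a_0^\top\hat\beta + \frac{z_0^\top(y - X\hat\beta)}{n - \widehat{\mathrm{df}}},
\qquad
\widehat{\mathrm{df}} = \#\{j : \hat\beta_j \ne 0\}.
\]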
Let $s_0$ denote the number of nonzero coefficients of the true coefficient vector and $\Sigma$ the population Gram matrix. The unadjusted de-biasing scheme may fail to achieve the nominal coverage as soon as $s_0 \gg n^{2/3}$ if $\Sigma$ is known. If $\Sigma$ is unknown, the degrees-of-freedom adjustment grants efficiency for the contrast $\langle a_0, \beta\rangle$ in a general direction $a_0$ under a sample-size condition involving both $s_0$ and the sparsity of $\Sigma^{-1}a_0$. The dependence on these two sparsity levels is optimal and closes a gap in previous upper and lower bounds. Our construction of the estimated score vector provides a novel methodology to handle dense directions $a_0$.
Beyond the degrees-of-freedom adjustment, our proof techniques yield a sharp error bound for the Lasso which is of independent interest.
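To make the simulation comparison concrete, the following minimal Python sketch (an illustration under simplifying assumptions, not the authors' code: identity population Gram matrix, direction $e_1$, known noise level, and an ad hoc Lasso tuning parameter) contrasts confidence-interval coverage of the unadjusted and degrees-of-freedom adjusted de-biased estimates.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s0, sigma = 400, 600, 60, 1.0    # sparsity s0 chosen above n^(2/3) ~ 54 for illustration
beta = np.zeros(p)
beta[:s0] = 1.0
alpha_lasso = 2 * sigma * np.sqrt(np.log(p) / n)   # ad hoc tuning parameter

n_rep = 200
cover_unadj = cover_adj = 0
for _ in range(n_rep):
    X = rng.standard_normal((n, p))            # Gaussian design with Sigma = I
    y = X @ beta + sigma * rng.standard_normal(n)
    hat_beta = Lasso(alpha=alpha_lasso, fit_intercept=False).fit(X, y).coef_
    resid = y - X @ hat_beta
    df = np.count_nonzero(hat_beta)            # dimension of the model selected by the Lasso
    z0 = X[:, 0]                               # score vector for direction e_1 when Sigma = I

    theta_unadj = hat_beta[0] + z0 @ resid / n          # unadjusted de-biased estimate
    theta_adj = hat_beta[0] + z0 @ resid / (n - df)     # degrees-of-freedom adjusted estimate

    se_unadj = sigma * np.linalg.norm(z0) / n           # plug-in standard errors (known sigma)
    se_adj = sigma * np.linalg.norm(z0) / (n - df)
    cover_unadj += abs(theta_unadj - beta[0]) <= 1.96 * se_unadj
    cover_adj += abs(theta_adj - beta[0]) <= 1.96 * se_adj

print("coverage, unadjusted :", cover_unadj / n_rep)
print("coverage, df-adjusted:", cover_adj / n_rep)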
Funding Statement
P.C.B. was partially supported by the NSF Grants DMS-1811976 and DMS-1945428.
C-H.Z. was partially supported by the NSF Grants DMS-1513378, IIS-1407939, DMS-1721495, IIS-1741390 and CCF-1934924.
Citation
Pierre C. Bellec. Cun-Hui Zhang. "De-biasing the lasso with degrees-of-freedom adjustment." Bernoulli 28(2), 713–743, May 2022. https://doi.org/10.3150/21-BEJ1348