Abstract
We establish statistical properties of random-weighting methods in LASSO regression under different regularization parameters $\lambda_n$ and suitable regularity conditions. The random-weighting methods in view concern repeated optimization of a randomized objective function, motivated by the need for computationally efficient uncertainty quantification in contemporary estimation settings. In the context of LASSO regression, we repeatedly assign analyst-drawn random weights to terms in the objective function, and optimize to obtain a sample of random-weighting estimators. We show that existing approaches have conditional model selection consistency and conditional asymptotic normality at different growth rates of $\lambda_n$ as $n \to \infty$. We propose an extension to the available random-weighting methods and establish that the resulting samples attain conditional sparse normality and conditional consistency in a growing-dimension setting. We illustrate the proposed methodology using synthetic and benchmark data sets, and we discuss the relationship of the results to approximate nonparametric Bayesian analysis and to perturbation bootstrap methods.
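To make the procedure concrete, here is a minimal Python sketch of one random-weighting scheme consistent with the description above: i.i.d. Exp(1) weights are drawn for the observation-level loss terms, and a weighted LASSO is re-solved for each draw. The simulated data, the fixed value of $\lambda_n$ (named `lambda_` below), and the use of scikit-learn's `Lasso` solver are illustrative assumptions, not the paper's implementation.

```python
# Sketch of random-weighting in LASSO regression (assumed one-weight scheme:
# i.i.d. Exp(1) weights on the loss terms only; data and lambda_ are illustrative).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.concatenate([np.array([2.0, -1.5, 1.0]), np.zeros(p - 3)])
y = X @ beta_true + rng.standard_normal(n)

lambda_ = 0.1   # regularization parameter (lambda_n in the paper), fixed here
B = 500         # number of random-weighting draws
samples = np.empty((B, p))

for b in range(B):
    w = rng.exponential(scale=1.0, size=n)  # analyst-drawn random weights
    # Weighted LASSO: minimize the w-weighted squared-error loss plus an L1 penalty
    fit = Lasso(alpha=lambda_).fit(X, y, sample_weight=w)
    samples[b] = fit.coef_

# 'samples' holds the random-weighting estimators; their spread provides
# uncertainty quantification, e.g. pointwise 95% intervals per coefficient:
lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)
```

Each pass through the loop costs one LASSO fit, which is what makes the method a computationally attractive alternative to full posterior sampling; the collection of estimators plays the role of an approximate (conditional) posterior sample.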
Funding Statement
TLN and MAN were supported in part by the University of Wisconsin Institute for the Foundations of Data Science through grants from the US National Science Foundation (1740707, 2023239) and in part by the National Institutes of Health (P01CA250972).
Acknowledgments
The authors thank the associate editor and an anonymous referee for their valuable feedback and suggestions, which led to a substantially improved manuscript. Insights from Nick Polson and Steve Wright have also served as helpful guideposts in this effort.
Citation
Tun Lee Ng, Michael A. Newton. "Random weighting in LASSO regression." Electron. J. Statist. 16(1): 3430–3481, 2022. https://doi.org/10.1214/22-EJS2020