Bayesian Analysis, Volume 5, Number 2 (2010), 369-411.
Penalized regression, standard errors, and Bayesian lassos
Penalized regression methods for simultaneous variable selection and coefficient estimation, especially those based on the lasso of Tibshirani (1996), have received a great deal of attention in recent years, mostly from a frequentist perspective. Properties such as consistency have been studied, and are achieved by different lasso variations. Here we look at a fully Bayesian formulation of the problem, which is flexible enough to encompass most versions of the lasso that have been previously considered. The advantages of the hierarchical Bayesian formulation are many. In addition to the usual ease of interpretation of hierarchical models, the Bayesian formulation produces valid standard errors (which can be problematic for the frequentist lasso), and is based on a geometrically ergodic Markov chain. We compare the performance of the Bayesian lassos to their frequentist counterparts using simulations, data sets that previous lasso papers have used, and a difficult modeling problem for predicting the collapse of governments around the world. In terms of prediction mean squared error, the Bayesian lasso performance is similar to, and in some cases better than, that of the frequentist lasso.
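The hierarchical Bayesian formulation referred to here is typically sampled with a Gibbs sampler. As a rough illustration (not the authors' own code), the sketch below implements the standard Bayesian lasso hierarchy of Park and Casella (2008), in which each coefficient has a scale-mixture-of-normals prior, beta_j | sigma^2, tau_j^2 ~ N(0, sigma^2 tau_j^2) with tau_j^2 ~ Exp(lambda^2/2). The function name `bayesian_lasso_gibbs` and the choice to hold the penalty `lam` fixed are illustrative assumptions; in practice `lam` is often given its own hyperprior or estimated by marginal maximum likelihood.

```python
import numpy as np

def bayesian_lasso_gibbs(X, y, lam=1.0, n_iter=2000, seed=0):
    """Illustrative Gibbs sampler for the Bayesian lasso hierarchy
    (Park & Casella 2008):
        beta_j | sigma2, tau2_j ~ N(0, sigma2 * tau2_j)
        tau2_j ~ Exp(lam^2 / 2),   p(sigma2) ~ 1/sigma2.
    `lam` is held fixed here for simplicity (an assumption of this sketch)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    beta = np.zeros(p)
    sigma2, tau2 = 1.0, np.ones(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # beta | rest ~ N(A^{-1} X'y, sigma2 * A^{-1}),  A = X'X + D^{-1}
        A = XtX + np.diag(1.0 / tau2)
        A_inv = np.linalg.inv(A)
        beta = rng.multivariate_normal(A_inv @ Xty, sigma2 * A_inv)
        # 1/tau2_j | rest ~ InverseGaussian(sqrt(lam^2 sigma2 / beta_j^2), lam^2)
        mu = np.sqrt(lam**2 * sigma2 / np.maximum(beta**2, 1e-12))
        tau2 = 1.0 / rng.wald(mu, lam**2)
        # sigma2 | rest ~ InvGamma((n-1+p)/2, ||y - X beta||^2/2 + beta' D^{-1} beta / 2)
        resid = y - X @ beta
        shape = (n - 1 + p) / 2.0
        scale = resid @ resid / 2.0 + beta @ (beta / tau2) / 2.0
        sigma2 = scale / rng.gamma(shape)  # inverse-gamma draw via 1/Gamma
        draws[t] = beta
    return draws
```

Posterior standard deviations of the `draws` columns give the "valid standard errors" the abstract contrasts with frequentist lasso standard errors, and the chain underlying this sampler is the one whose geometric ergodicity the paper studies.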
First available in Project Euclid: 20 June 2012
Kyung, Minjung; Gill, Jeff; Ghosh, Malay; Casella, George. Penalized regression, standard errors, and Bayesian lassos. Bayesian Anal. 5 (2010), no. 2, 369--411. doi:10.1214/10-BA607. https://projecteuclid.org/euclid.ba/1340218343