Penalized regression, standard errors, and Bayesian lassos
George Casella, Malay Ghosh, Jeff Gill, Minjung Kyung
Bayesian Anal. 5(2): 369-411 (June 2010). DOI: 10.1214/10-BA607

Abstract

Penalized regression methods for simultaneous variable selection and coefficient estimation, especially those based on the lasso of Tibshirani (1996), have received a great deal of attention in recent years, mostly through frequentist models. Properties such as consistency have been studied, and are achieved by different lasso variations. Here we look at a fully Bayesian formulation of the problem, which is flexible enough to encompass most versions of the lasso that have been previously considered. The advantages of the hierarchical Bayesian formulations are many. In addition to the usual ease of interpretation of hierarchical models, the Bayesian formulation produces valid standard errors (which can be problematic for the frequentist lasso), and is based on a geometrically ergodic Markov chain. We compare the performance of the Bayesian lassos to their frequentist counterparts using simulations, data sets that previous lasso papers have used, and a difficult modeling problem for predicting the collapse of governments around the world. In terms of prediction mean squared error, the Bayesian lasso performance is similar to, and in some cases better than, the frequentist lasso.
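The hierarchical formulation the abstract refers to represents the lasso's Laplace prior on each coefficient as a normal distribution with an exponentially mixed variance, which yields a simple Gibbs sampler with standard full conditionals. The sketch below is a minimal illustration of that sampler in the style of Park and Casella (2008), not code from this paper; the function name, the fixed penalty `lam`, and the improper 1/σ² prior on the error variance are our assumptions.

```python
import numpy as np

def bayesian_lasso_gibbs(X, y, lam=1.0, n_iter=2000, seed=0):
    """Gibbs sampler sketch for a Bayesian lasso with fixed penalty lam.

    Hierarchy (Park-Casella style): beta_j | sigma^2, tau_j^2 ~ N(0, sigma^2 tau_j^2),
    tau_j^2 ~ Exp(lam^2 / 2), and an improper 1/sigma^2 prior on sigma^2.
    Returns the (n_iter, p) array of posterior draws of beta.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX, Xty = X.T @ X, X.T @ y
    beta = np.zeros(p)
    sigma2 = 1.0
    inv_tau2 = np.ones(p)          # 1 / tau_j^2
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # beta | rest ~ N(A^{-1} X'y, sigma^2 A^{-1}),  A = X'X + diag(1/tau^2)
        A = XtX + np.diag(inv_tau2)
        L = np.linalg.cholesky(A)
        mean = np.linalg.solve(A, Xty)
        # solve(L', z) with z ~ N(0, I) has covariance A^{-1}
        beta = mean + np.sqrt(sigma2) * np.linalg.solve(L.T, rng.standard_normal(p))
        # sigma^2 | rest ~ InvGamma(shape, rate); shape here assumes the 1/sigma^2 prior
        resid = y - X @ beta
        rate = 0.5 * (resid @ resid + beta**2 @ inv_tau2)
        sigma2 = rate / rng.gamma((n - 1 + p) / 2.0)
        # 1/tau_j^2 | rest ~ InverseGaussian(sqrt(lam^2 sigma^2 / beta_j^2), lam^2)
        mu = np.sqrt(lam**2 * sigma2 / np.maximum(beta**2, 1e-12))
        inv_tau2 = rng.wald(mu, lam**2)
        draws[t] = beta
    return draws
```

Because the sampler draws the full posterior of beta, pointwise posterior standard deviations of the draws give the valid standard errors discussed in the abstract, something a frequentist lasso point estimate does not directly provide.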

Citation


George Casella, Malay Ghosh, Jeff Gill, Minjung Kyung. "Penalized regression, standard errors, and Bayesian lassos." Bayesian Anal. 5(2): 369-411, June 2010. https://doi.org/10.1214/10-BA607

Information

Published: June 2010
First available in Project Euclid: 20 June 2012

zbMATH: 1330.62289
MathSciNet: MR2719657
Digital Object Identifier: 10.1214/10-BA607

Rights: Copyright © 2010 International Society for Bayesian Analysis

Journal article, 43 pages
