Geometric ergodicity of the Bayesian lasso
Kshitij Khare, James P. Hobert
Electron. J. Statist. 7: 2150-2163 (2013). DOI: 10.1214/13-EJS841


Consider the standard linear model $\mathbf{y}=X\boldsymbol{\beta}+\sigma\boldsymbol{\epsilon}$, where the components of $\boldsymbol{\epsilon}$ are iid standard normal errors. Park and Casella [14] consider a Bayesian treatment of this model with a Laplace/Inverse-Gamma prior on $(\boldsymbol{\beta},\sigma)$. They introduce a Data Augmentation approach, called the Bayesian lasso algorithm, that can be used to explore the resulting intractable posterior density. In this paper, the Markov chain underlying the Bayesian lasso algorithm is shown to be geometrically ergodic for arbitrary values of the sample size $n$ and the number of variables $p$. This is important because geometric ergodicity provides theoretical justification for the use of the Markov chain CLT, which in turn yields asymptotic standard errors for Markov chain based estimates of posterior quantities. Kyung et al. [12] provide a proof of geometric ergodicity for the restricted case $n\geq p$, but, as we explain in this paper, their proof is incorrect. Our approach is different and more direct, and it enables us to establish geometric ergodicity for arbitrary $n$ and $p$.
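For readers who wish to experiment with the chain studied here, the Park–Casella Gibbs sampler can be sketched from its standard full conditionals: a multivariate normal draw for $\boldsymbol{\beta}$, an inverse-gamma draw for $\sigma^2$, and independent inverse-Gaussian draws for the latent scales $1/\tau_j^2$. This is a minimal illustration, not the authors' code; the function name, the improper scale prior $\pi(\sigma^2)\propto 1/\sigma^2$, and the default settings are our assumptions.

```python
import numpy as np

def bayesian_lasso_gibbs(y, X, lam=1.0, n_iter=1000, seed=None):
    """Sketch of the Park-Casella data-augmentation (Bayesian lasso) sampler.

    Assumes the improper prior pi(sigma^2) ~ 1/sigma^2 and a fixed
    regularization parameter `lam`; names and defaults are illustrative.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    sigma2 = 1.0
    inv_tau2 = np.ones(p)          # latent scales 1/tau_j^2
    XtX, Xty = X.T @ X, X.T @ y
    samples = np.empty((n_iter, p))
    for t in range(n_iter):
        # beta | rest ~ N(A^{-1} X'y, sigma2 * A^{-1}), A = X'X + D_tau^{-1}
        A_inv = np.linalg.inv(XtX + np.diag(inv_tau2))
        beta = rng.multivariate_normal(A_inv @ Xty, sigma2 * A_inv)
        # sigma2 | rest ~ Inv-Gamma((n-1)/2 + p/2, ||y - X beta||^2/2 + beta' D^{-1} beta / 2)
        resid = y - X @ beta
        shape = (n - 1) / 2 + p / 2
        scale = resid @ resid / 2 + beta @ (inv_tau2 * beta) / 2
        sigma2 = scale / rng.gamma(shape)
        # 1/tau_j^2 | rest ~ Inverse-Gaussian(mean = sqrt(lam^2 sigma2 / beta_j^2), shape = lam^2)
        mean_ig = np.sqrt(lam**2 * sigma2 / beta**2)
        inv_tau2 = rng.wald(mean_ig, lam**2)
        samples[t] = beta
    return samples
```

A draw of `samples` gives a Monte Carlo approximation to the posterior of $\boldsymbol{\beta}$; the geometric ergodicity established in this paper is what licenses CLT-based standard errors for averages of such draws.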




Published: 2013
First available in Project Euclid: 10 September 2013

zbMATH: 1349.60124
MathSciNet: MR3104915
Digital Object Identifier: 10.1214/13-EJS841

Primary: 60J27
Secondary: 62F15

Rights: Copyright © 2013 The Institute of Mathematical Statistics and the Bernoulli Society

