We consider the problem of recovering a high-dimensional vector μ observed in white noise, where the unknown vector μ is assumed to be sparse. The objective of the paper is to develop a Bayesian formalism which gives rise to a family of l0-type penalties. The penalties are associated with various choices of the prior distributions πn(⋅) on the number of nonzero entries of μ and, hence, are easy to interpret. The resulting Bayesian estimators lead to a general thresholding rule which accommodates many of the known thresholding and model selection procedures as particular cases corresponding to specific choices of πn(⋅). Furthermore, they achieve optimality in a rather general setting under very mild conditions on the prior. We also specify the class of priors πn(⋅) for which the resulting estimator is adaptively optimal (in the minimax sense) for a wide range of sparse sequences, and consider several examples of such priors.
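To make the connection between l0-type penalties and thresholding concrete, the following is a minimal sketch of a generic l0-penalized estimator for the normal means model y = μ + ε. It is an illustration of the general technique only, not the paper's specific Bayesian procedure: the penalty function `pen(k)` stands in for a penalty derived from some prior πn(⋅), and the BIC-style choice in the usage example below is an arbitrary assumption. Because the penalty depends on μ only through the number of nonzero entries, minimizing the penalized residual sum of squares reduces to keeping the k largest observations in absolute value, for the best k — a thresholding rule.

```python
import numpy as np

def l0_penalized_estimate(y, pen):
    """Estimate a sparse mean vector from y = mu + noise by minimizing
    ||y - mu||^2 + pen(k), where k is the number of nonzero entries of mu.

    For fixed k the optimum keeps the k largest |y_i| unchanged and zeroes
    the rest, so it suffices to search over k = 0, ..., n.
    `pen` is a user-supplied l0-type penalty function of k (illustrative).
    """
    n = len(y)
    order = np.argsort(-np.abs(y))      # indices sorted by decreasing |y_i|
    y_sorted = y[order]
    sq = y_sorted ** 2
    # rss[k] = residual sum of squares when keeping the k largest entries
    rss = np.concatenate(([sq.sum()], sq.sum() - np.cumsum(sq)))
    crit = rss + np.array([pen(k) for k in range(n + 1)])
    k = int(np.argmin(crit))            # model size minimizing the criterion
    mu = np.zeros(n)
    mu[order[:k]] = y_sorted[:k]        # hard thresholding: keep k largest
    return mu

# Usage with an assumed BIC-style penalty pen(k) = 2k log n (unit noise variance):
y = np.array([5.0, 0.1, -0.2, 4.0, 0.05])
mu_hat = l0_penalized_estimate(y, lambda k: 2 * k * np.log(len(y)))
```

Different choices of `pen` recover familiar procedures: a linear penalty of the form 2σ²k gives an AIC-type rule, while penalties growing like 2σ²k log(n/k) correspond to more aggressive thresholds suited to sparse sequences.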
"On optimality of Bayesian testimation in the normal means problem." Ann. Statist. 35 (5) 2261 - 2286, October 2007. https://doi.org/10.1214/009053607000000226