In the present paper, we consider the application of overcomplete dictionaries to the solution of general ill-posed linear inverse problems. In the context of regression problems, there has been an enormous amount of effort to recover an unknown function using an overcomplete dictionary. One of the most popular methods, Lasso and its variants, is based on penalized likelihood maximization and relies on stringent assumptions on the dictionary, the so-called compatibility conditions, to establish its convergence rates. While these conditions may be satisfied for the original dictionary functions, they usually fail for their images under the linear operator, due to the contraction properties the operator imposes.
In what follows, we bypass this difficulty with a novel approach: we invert each of the dictionary functions and match the resulting expansion to the true function, thus avoiding unrealistic assumptions on the dictionary and applying Lasso in a predictive setting. We examine both the white noise and the observational model formulations, and also discuss how exact inverse images of the dictionary functions can be replaced by approximate counterparts. Furthermore, we show how the suggested methodology extends to estimating the mixing density in a continuous mixture. For all the situations listed above, we provide sharp oracle inequalities for the risk in a non-asymptotic setting.
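To make the general setup concrete, the following sketch simulates a linear inverse problem y = Af + noise, expands the unknown f in an overcomplete dictionary, and fits the coefficients by Lasso on the operator images of the atoms. All concrete choices here (the integration operator A, the cosine dictionary, the ISTA solver, the penalty level) are illustrative assumptions for exposition, not the paper's estimator or its tuning.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.linspace(0.0, 1.0, n)
dx = t[1] - t[0]

# Illustrative overcomplete cosine dictionary: phi_k(t) = cos(pi * k * t)
p = 30
Phi = np.cos(np.pi * np.outer(t, np.arange(p)))

def A(g):
    # A simple compact (hence ill-posed) operator: integration, (Af)(t) = int_0^t g
    return np.cumsum(g) * dx

# Sparse ground truth: f is a combination of two dictionary atoms
theta_true = np.zeros(p)
theta_true[[2, 7]] = [1.5, -1.0]
f_true = Phi @ theta_true
y = A(f_true) + 0.001 * rng.standard_normal(n)   # noisy observation of Af

# Design matrix whose columns are the operator images of the atoms,
# so the Lasso fit is a prediction problem in the image domain
X = np.column_stack([A(Phi[:, k]) for k in range(p)])

def lasso_ista(X, y, lam, n_iter=5000):
    """Lasso via ISTA (proximal gradient with soft-thresholding)."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    theta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ theta - y)
        z = theta - grad / L
        theta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return theta

theta_hat = lasso_ista(X, y, lam=1e-3)
f_hat = Phi @ theta_hat                    # reconstruct f from fitted coefficients
rel_err = np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true)
```

Because the operator contracts high-frequency atoms (the columns A phi_k shrink like 1/k), the design matrix is poorly conditioned even when the original dictionary is well behaved, which is the difficulty the abstract refers to.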
"Solution of linear ill-posed problems using overcomplete dictionaries." Ann. Statist. 44 (4) 1739 - 1764, August 2016. https://doi.org/10.1214/16-AOS1445