Abstract
Kernel methods for deconvolution have attractive features and prevail in the literature. However, they have disadvantages, including the fact that they are usually suitable only for cases where the error distribution is infinitely supported and its characteristic function never vanishes. Even in these settings, optimal convergence rates are achieved by kernel estimators only when the kernel is chosen to adapt to the unknown smoothness of the target distribution. In this paper we suggest alternative ridge methods, not involving kernels in any way. We show that ridge methods (a) do not require the assumption that the error-distribution characteristic function is nonvanishing; (b) adapt themselves remarkably well to the smoothness of the target density, so that the degree of smoothness does not need to be estimated directly; and (c) give optimal convergence rates in a broad range of settings.
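As a rough illustration of the ridge idea in Fourier deconvolution (a generic sketch, not the estimator analyzed in the paper), the snippet below divides the empirical characteristic function of the contaminated observations by the known error characteristic function, with a ridge term added to the denominator so the division stays stable even where the error characteristic function is small or vanishes. The Gaussian error model, the grids, and the choice of ridge parameter rho = n^{-1/2} are illustrative assumptions.

```python
import numpy as np

# Generic ridge-regularized Fourier deconvolution sketch (illustrative only).
# Model: Y = X + eps, with eps Gaussian and its characteristic function known.

rng = np.random.default_rng(0)
n = 2000
sigma_eps = 0.4

x = rng.normal(loc=0.0, scale=1.0, size=n)           # unobserved target sample
y = x + rng.normal(scale=sigma_eps, size=n)           # contaminated observations

t = np.linspace(-20, 20, 2001)                        # frequency grid
phi_hat = np.exp(1j * np.outer(t, y)).mean(axis=1)    # empirical char. function of Y
phi_eps = np.exp(-0.5 * (sigma_eps * t) ** 2)         # known Gaussian error char. function

rho = n ** (-0.5)                                     # illustrative ridge parameter (assumption)
# Ridge-regularized "division": conj(phi_eps) / (|phi_eps|^2 + rho) remains bounded
# even where phi_eps is tiny or vanishes, which is the point of the ridge idea.
phi_x_hat = phi_hat * np.conj(phi_eps) / (np.abs(phi_eps) ** 2 + rho)

grid = np.linspace(-4, 4, 201)                        # evaluation points for the density
dt = t[1] - t[0]
f_hat = (np.exp(-1j * np.outer(grid, t)) @ phi_x_hat).real * dt / (2 * np.pi)
f_hat = np.clip(f_hat, 0, None)                       # crude nonnegativity correction

print(f_hat[len(grid) // 2])                          # estimate near x = 0 (true N(0,1) value ~0.40)
```

Unlike a deconvolution kernel estimator, this construction needs no kernel choice tuned to the smoothness of the target density; only the ridge parameter must be selected.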
Citation
Peter Hall, Alexander Meister. "A ridge-parameter approach to deconvolution." Ann. Statist. 35 (4) 1535-1558, August 2007. https://doi.org/10.1214/009053607000000028