Open Access
Optimal Rates of Convergence for Nonparametric Statistical Inverse Problems
Ja-Yong Koo
Ann. Statist. 21(2): 590-599 (June, 1993). DOI: 10.1214/aos/1176349138

Abstract

Consider an unknown regression function $f$ of the response $Y$ on a $d$-dimensional measurement variable $X$. It is assumed that $f$ belongs to a class of functions having a smoothness measure $p$. Let $T$ denote a known linear operator of order $q$ which maps $f$ to another function $T(f)$ in a space $G$. Let $\hat{T}_n$ denote an estimator of $T(f)$ based on a random sample of size $n$ from the distribution of $(X, Y)$, and let $\|\hat{T}_n - T(f)\|_G$ be a norm of $\hat{T}_n - T(f)$. Under appropriate regularity conditions, it is shown that the optimal rate of convergence for $\|\hat{T}_n - T(f)\|_G$ is $n^{-(p - q)/(2p + d)}$. The result is applied to differentiation, fractional differentiation and deconvolution.
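To see what the rate $n^{-(p - q)/(2p + d)}$ means in concrete cases, the exponent can be evaluated for a few illustrative parameter choices. The sketch below is not from the paper; the specific values of $p$, $q$ and $d$ are assumptions chosen only to show how the operator order $q$ slows the rate relative to estimating $f$ itself ($q = 0$).

```python
from fractions import Fraction

def rate_exponent(p, q, d):
    """Exponent in the optimal rate n^{-(p - q)/(2p + d)},
    where p is the smoothness of f, q the order of the operator T,
    and d the dimension of the measurement variable X."""
    return Fraction(p - q, 2 * p + d)

# Estimating f itself (q = 0): the classical nonparametric exponent p/(2p + d).
print(rate_exponent(2, 0, 1))  # 2/5, i.e. rate n^{-2/5}

# Estimating the first derivative (q = 1) of a twice-smooth f in one dimension:
# the exponent drops, so convergence is slower.
print(rate_exponent(2, 1, 1))  # 1/5, i.e. rate n^{-1/5}
```

As the examples suggest, differentiation (and, analogously, fractional differentiation or deconvolution) pays a price of $q$ in the numerator of the exponent, which is the ill-posedness of the inverse problem showing up in the minimax rate.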

Citation


Ja-Yong Koo. "Optimal Rates of Convergence for Nonparametric Statistical Inverse Problems." Ann. Statist. 21 (2) 590 - 599, June, 1993. https://doi.org/10.1214/aos/1176349138

Information

Published: June, 1993
First available in Project Euclid: 12 April 2007

zbMATH: 0778.62040
MathSciNet: MR1232506
Digital Object Identifier: 10.1214/aos/1176349138

Subjects:
Primary: 62G20
Secondary: 62G05

Keywords: inverse problems, method of presmoothing, optimal rate of convergence, regression

Rights: Copyright © 1993 Institute of Mathematical Statistics
