## The Annals of Statistics

### Optimal Rates of Convergence for Nonparametric Statistical Inverse Problems

Ja-Yong Koo

#### Abstract

Consider an unknown regression function $f$ of the response $Y$ on a $d$-dimensional measurement variable $X$. It is assumed that $f$ belongs to a class of functions having a smoothness measure $p$. Let $T$ denote a known linear operator of order $q$ which maps $f$ to another function $T(f)$ in a space $G$. Let $\hat{T}_n$ denote an estimator of $T(f)$ based on a random sample of size $n$ from the distribution of $(X, Y)$, and let $\|\hat{T}_n - T(f)\|_G$ be a norm of $\hat{T}_n - T(f)$. Under appropriate regularity conditions, it is shown that the optimal rate of convergence for $\|\hat{T}_n - T(f)\|_G$ is $n^{-(p - q)/(2p + d)}$. The result is applied to differentiation, fractional differentiation and deconvolution.
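The stated rate can be evaluated numerically for illustration. The following is a minimal sketch, assuming the parameters are as defined in the abstract ($p$ = smoothness, $q$ = order of the operator $T$, $d$ = dimension of $X$); the function name is hypothetical.

```python
# Sketch: evaluating the minimax rate n^{-(p - q)/(2p + d)} from the
# abstract for illustrative parameter choices (function name is ours).

def minimax_rate(n, p, q, d):
    """Optimal rate of convergence for estimating T(f), where f has
    smoothness p, T is a linear operator of order q, and X is
    d-dimensional. Requires p > q for the rate to shrink with n."""
    return n ** (-(p - q) / (2 * p + d))

# Direct estimation of f (q = 0) recovers the classical nonparametric
# rate n^{-p/(2p+d)}; differentiation (q = 1) slows convergence.
print(minimax_rate(10_000, p=2, q=0, d=1))  # rate n^{-2/5}
print(minimax_rate(10_000, p=2, q=1, d=1))  # slower rate n^{-1/5}
```

Note how applying an operator of higher order $q$ yields a larger (slower) rate, consistent with the harder inverse problem.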

#### Article information

**Source**
Ann. Statist., Volume 21, Number 2 (1993), 590-599.

**Dates**
First available in Project Euclid: 12 April 2007

https://projecteuclid.org/euclid.aos/1176349138

**Digital Object Identifier**
doi:10.1214/aos/1176349138

**Mathematical Reviews number (MathSciNet)**
MR1232506

**Zentralblatt MATH identifier**
0778.62040