The Annals of Statistics

Optimal Rates of Convergence for Nonparametric Statistical Inverse Problems

Ja-Yong Koo


Abstract

Consider an unknown regression function $f$ of the response $Y$ on a $d$-dimensional measurement variable $X$. It is assumed that $f$ belongs to a class of functions having a smoothness measure $p$. Let $T$ denote a known linear operator of order $q$ which maps $f$ to another function $T(f)$ in a space $G$. Let $\hat{T}_n$ denote an estimator of $T(f)$ based on a random sample of size $n$ from the distribution of $(X, Y)$, and let $\|\hat{T}_n - T(f)\|_G$ be a norm of $\hat{T}_n - T(f)$. Under appropriate regularity conditions, it is shown that the optimal rate of convergence for $\|\hat{T}_n - T(f)\|_G$ is $n^{-(p - q)/(2p + d)}$. The result is applied to differentiation, fractional differentiation and deconvolution.
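The rate formula in the abstract can be illustrated numerically. The sketch below (the function name `optimal_rate` and the sample parameter values are my own, not from the paper) evaluates $n^{-(p - q)/(2p + d)}$ for direct regression ($q = 0$, recovering the classical rate $n^{-p/(2p + d)}$) and for first-order differentiation ($q = 1$), showing how a higher-order operator slows convergence.

```python
def optimal_rate(n, p, q, d):
    # Optimal rate of convergence n^{-(p - q)/(2p + d)} for estimating T(f),
    # where p is the smoothness of f, q is the order of the operator T,
    # and d is the dimension of the measurement variable X.
    return n ** (-(p - q) / (2 * p + d))

# Illustrative values (assumed, not from the paper): n = 10,000 observations,
# smoothness p = 2, dimension d = 1.
n, p, d = 10_000, 2.0, 1
for q, label in [(0.0, "direct regression (q = 0)"),
                 (1.0, "first derivative   (q = 1)")]:
    print(f"{label}: rate ~ n^{-(p - q) / (2 * p + d)} = "
          f"{optimal_rate(n, p, q, d):.4f}")
```

For $q = 0$ the exponent is $-p/(2p + d) = -0.4$, the familiar nonparametric regression rate; for $q = 1$ it degrades to $-0.2$, consistent with the abstract's requirement that the operator order $q$ enters the numerator of the exponent.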

Article information

Source
Ann. Statist., Volume 21, Number 2 (1993), 590-599.

Dates
First available in Project Euclid: 12 April 2007

Permanent link to this document
https://projecteuclid.org/euclid.aos/1176349138

Digital Object Identifier
doi:10.1214/aos/1176349138

Mathematical Reviews number (MathSciNet)
MR1232506

Zentralblatt MATH identifier
0778.62040


Subjects
Primary: 62G20: Asymptotic properties
Secondary: 62G05: Estimation

Keywords
Regression; inverse problems; method of presmoothing; optimal rate of convergence

Citation

Koo, Ja-Yong. Optimal Rates of Convergence for Nonparametric Statistical Inverse Problems. Ann. Statist. 21 (1993), no. 2, 590--599. doi:10.1214/aos/1176349138. https://projecteuclid.org/euclid.aos/1176349138

