Trade-offs between global and local risks in nonparametric function estimation

T. Tony Cai, Mark G. Low, and Linda H. Zhao



The problem of loss adaptation is investigated: given a fixed parameter space, the goal is to construct an estimator that adapts to the loss function, in the sense that it is optimal both globally and locally at every point. Within the class of estimator sequences that attain the minimax rate for estimating the entire function over a fixed Besov space, a lower bound is given on the performance for estimating the function at each point. When the global and local minimax rates of convergence differ, this bound exceeds the usual minimax rate for estimation at a point by a logarithmic factor. Conversely, a lower bound on the maximum global risk is given for estimators that attain the optimal minimax rate of convergence at every point. A key role in the proofs is played by an inequality concerning estimation in a two-parameter statistical problem, which can be viewed as a generalization of an inequality due to Brown and Low and may be of independent interest. Finally, a particular wavelet estimator is constructed which is globally rate optimal and which attains the lower bound for the local risk provided by our inequality.
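To make the trade-off concrete, the following display sketches the rates in one standard setting. The white noise model, the Besov ball, and the exponents shown are illustrative assumptions drawn from the general minimax literature, not quoted from the paper itself.

```latex
% Assumed setting: white noise model
%   dY(t) = f(t)\,dt + n^{-1/2}\,dW(t), \qquad f \in B^{\alpha}_{p,q}(M),
% with effective pointwise smoothness \alpha' = \alpha - 1/p.

% Global minimax rate (integrated squared error):
\[
\inf_{\hat f}\ \sup_{f \in B^{\alpha}_{p,q}(M)}
  E \| \hat f - f \|_2^2 \;\asymp\; n^{-2\alpha/(1+2\alpha)} .
\]

% Local minimax rate at a fixed point t_0:
\[
\inf_{\hat f}\ \sup_{f \in B^{\alpha}_{p,q}(M)}
  E \bigl( \hat f(t_0) - f(t_0) \bigr)^2
  \;\asymp\; n^{-2\alpha'/(1+2\alpha')} .
\]

% Trade-off: when p < \infty the two rates differ, and any estimator
% sequence \hat f_n attaining the global rate must pay a logarithmic
% price at every point:
\[
\sup_{f \in B^{\alpha}_{p,q}(M)}
  E \bigl( \hat f_n(t_0) - f(t_0) \bigr)^2
  \;\ge\; c \left( \frac{\log n}{n} \right)^{2\alpha'/(1+2\alpha')} .
\]
```

Under these assumptions, the logarithmic factor in the last display is the sense in which globally optimal estimators cannot simultaneously be fully rate optimal at individual points.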

Article information

Bernoulli, Volume 13, Number 1 (2007), 1–19.

First available in Project Euclid: 30 March 2007


Keywords: Besov class; constrained risk inequality; loss adaptation; normal location–scale model; nonparametric function estimation; nonparametric regression; superefficiency; wavelets; white noise


Cai, T. Tony; Low, Mark G.; Zhao, Linda H. Trade-offs between global and local risks in nonparametric function estimation. Bernoulli 13 (2007), no. 1, 1--19. doi:10.3150/07-BEJ5001.
