Open Access
Choosing a Kernel Regression Estimator
C.-K. Chu, J. S. Marron
Statist. Sci. 6(4): 404-419 (November, 1991). DOI: 10.1214/ss/1177011586

Abstract

For nonparametric regression, there are two popular methods for constructing kernel estimators, involving choosing weights either by direct kernel evaluation or by the convolution of the kernel with a histogram representing the data. There is an extensive literature concerning both of these estimators, but a comparatively small amount of thought has been given to the choice between them. The few papers that do treat both types of estimator tend to present only one side of the pertinent issues. The purpose of this paper is to present a balanced discussion, at an intuitive level, of the differences between the estimators, to allow users of nonparametric regression to rationally make this choice for themselves. While these estimators give very nearly the same performance in the case of a fixed, essentially equally spaced design, their performance is quite different when there are serious departures from equal spacing, or when the design points are randomly chosen. Each of the estimators has several important advantages and disadvantages, so the choice of "best" is a personal one, which should depend on the particular estimation setting at hand.
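
The abstract does not name the two constructions, but the standard instances of the "direct kernel evaluation" and "convolution with a histogram" weightings are the Nadaraya-Watson and Gasser-Mueller estimators, respectively. The following is a minimal illustrative sketch of both, assuming a Gaussian kernel and a hypothetical toy data set; it is not code from the paper, only a reading aid for how the two weighting schemes differ.

import numpy as np
from scipy.stats import norm

def nadaraya_watson(x_grid, X, Y, h):
    """Evaluation-type weights: the kernel is evaluated directly at (x - X_i)/h
    and the weights are normalized by their sum."""
    K = norm.pdf((x_grid[:, None] - X[None, :]) / h)   # (n_grid, n) kernel matrix
    return (K @ Y) / K.sum(axis=1)

def gasser_mueller(x_grid, X, Y, h):
    """Convolution-type weights: the kernel is integrated over cells formed by
    midpoints between the ordered design points."""
    order = np.argsort(X)
    Xs, Ys = X[order], Y[order]
    # Cell boundaries: midpoints of consecutive ordered design points,
    # with +/- infinity at the ends so the weights sum to one.
    s = np.concatenate(([-np.inf], (Xs[:-1] + Xs[1:]) / 2, [np.inf]))
    # Weight of Y_i at x is the kernel mass over its cell:
    # Phi((x - s_{i-1})/h) - Phi((x - s_i)/h) for the Gaussian kernel.
    cdf = norm.cdf((x_grid[:, None] - s[None, :]) / h)
    W = cdf[:, :-1] - cdf[:, 1:]                        # (n_grid, n) weights
    return W @ Ys

# Toy example with a random (hence unequally spaced) design, the setting in
# which the abstract says the two estimators behave most differently.
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, 100))
Y = np.sin(2 * np.pi * X) + rng.normal(scale=0.3, size=X.size)
x_grid = np.linspace(0.05, 0.95, 50)
h = 0.08
print(nadaraya_watson(x_grid, X, Y, h)[:5])
print(gasser_mueller(x_grid, X, Y, h)[:5])

On an essentially equally spaced design the two sets of weights nearly coincide; with clustered or random design points the normalization in the evaluation-type estimator and the cell widths in the convolution-type estimator respond to the local spacing in different ways, which is the trade-off the paper discusses.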

Citation

C.-K. Chu, J. S. Marron. "Choosing a Kernel Regression Estimator." Statist. Sci. 6(4): 404-419, November, 1991. https://doi.org/10.1214/ss/1177011586

Information

Published: November, 1991
First available in Project Euclid: 19 April 2007

zbMATH: 0955.62561
MathSciNet: MR1146907
Digital Object Identifier: 10.1214/ss/1177011586

Keywords: asymptotic variance, design points, kernel estimators, nonparametric regression

Rights: Copyright © 1991 Institute of Mathematical Statistics
