Open Access
Asymptotic Optimality of the Fast Randomized Versions of GCV and $C_L$ in Ridge Regression and Regularization
Didier A. Girard
Ann. Statist. 19(4): 1950-1963 (December, 1991). DOI: 10.1214/aos/1176348380


Ridge regression is a well-known technique for estimating the coefficients of a linear model. The method of regularization is a similar approach commonly used to solve underdetermined linear equations with discrete noisy data. When applying such a technique, the choice of the smoothing (or regularization) parameter $h$ is crucial. Generalized cross-validation (GCV) and Mallows' $C_L$ are two popular methods for estimating a good value of $h$ from the data. Their asymptotic properties, such as consistency and asymptotic optimality, have been studied extensively [Craven and Wahba (1979); Golub, Heath and Wahba (1979); Speckman (1985)]. Li (1985, 1986) established important convergence results for the actual (random) parameter selected by GCV and $C_L$. Recently, Girard (1987, 1989) proposed fast randomized versions of GCV and $C_L$. The purpose of this paper is to show that the above convergence results also hold for these new methods.
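The "fast randomized" idea can be illustrated concretely: the GCV score requires the trace of the influence (hat) matrix $A(h)$, and the randomized version replaces that trace by a Monte Carlo estimate $w'A(h)w$ with Gaussian probe vectors $w$ (since $E[w'Aw] = \operatorname{tr} A$), so $A(h)$ only needs to be applied to vectors, never formed. A minimal numpy sketch, with simulated data and illustrative parameter values not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated ridge-regression problem (illustrative sizes, not from the paper).
n, p = 200, 30
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + 0.5 * rng.standard_normal(n)

def hat_apply(h, v):
    # Apply the ridge influence matrix A(h) = X (X'X + n*h*I)^{-1} X' to v.
    return X @ np.linalg.solve(X.T @ X + n * h * np.eye(p), X.T @ v)

def gcv_exact(h):
    # Classical GCV score: V(h) = (1/n)||(I - A(h))y||^2 / [(1/n) tr(I - A(h))]^2.
    A = X @ np.linalg.solve(X.T @ X + n * h * np.eye(p), X.T)
    r = y - A @ y
    return (r @ r / n) / ((1.0 - np.trace(A) / n) ** 2)

def gcv_randomized(h, n_probes=10):
    # Randomized GCV: tr A(h) is estimated by averaging w' A(h) w over
    # standard Gaussian probes w, for which E[w' A w] = tr A.
    trace_est = np.mean([w @ hat_apply(h, w)
                         for w in rng.standard_normal((n_probes, n))])
    r = y - hat_apply(h, y)
    return (r @ r / n) / ((1.0 - trace_est / n) ** 2)

score_exact = gcv_exact(0.01)
score_rand = gcv_randomized(0.01)
```

Because `hat_apply` only ever solves linear systems against vectors, the randomized score avoids forming or diagonalizing $A(h)$, which is what makes the method "fast" for large problems.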




Published: December, 1991
First available in Project Euclid: 12 April 2007

zbMATH: 0754.62030
MathSciNet: MR1135158
Digital Object Identifier: 10.1214/aos/1176348380

Primary: 62G05
Secondary: 65D10, 65R20, 65U05, 92A07

Keywords: $C_L$, asymptotic optimality, GCV, Monte Carlo techniques, randomized versions, regularization, ridge regression, smoothing splines

Rights: Copyright © 1991 Institute of Mathematical Statistics

Vol. 19 • No. 4 • December 1991