Statistical Science

Comment: Models Are Approximations!

Anthony C. Davison, Erwan Koch, and Jonathan Koh


Abstract

This discussion focuses on areas of disagreement with the papers, particularly the target of inference and the case for using the robust ‘sandwich’ variance estimator in the presence of moderate mis-specification. We also suggest that existing procedures may be appreciably more powerful for detecting mis-specification than the authors’ RAV statistic, and comment on the use of the pairs bootstrap in balanced situations.
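For readers unfamiliar with the two quantities the abstract contrasts, the following is a minimal illustrative sketch (not taken from the article) of the robust 'sandwich' variance estimator and the pairs bootstrap for ordinary least squares, on simulated heteroscedastic data. All variable names and the simulation setting are hypothetical.

```python
import numpy as np

# Simulated data with heteroscedastic errors: a mildly
# mis-specified setting for the usual linear model.
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 1, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 + x, n)

# Ordinary least squares fit.
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# Model-based variance: sigma^2 (X'X)^{-1}, valid under
# homoscedastic errors.
sigma2 = resid @ resid / (n - X.shape[1])
var_model = sigma2 * XtX_inv

# Sandwich (HC0) variance:
# (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1},
# consistent without the constant-variance assumption.
meat = X.T @ (X * resid[:, None] ** 2)
var_sandwich = XtX_inv @ meat @ XtX_inv

# Pairs bootstrap: resample (x_i, y_i) pairs with replacement
# and refit, giving standard errors that, like the sandwich,
# do not rely on the assumed error model.
B = 500
boot = np.empty((B, 2))
for b in range(B):
    idx = rng.integers(0, n, n)
    Xb, yb = X[idx], y[idx]
    boot[b] = np.linalg.solve(Xb.T @ Xb, Xb.T @ yb)
se_boot = boot.std(axis=0, ddof=1)

print("model-based SE:", np.sqrt(np.diag(var_model)))
print("sandwich SE:   ", np.sqrt(np.diag(var_sandwich)))
print("bootstrap SE:  ", se_boot)
```

Under this kind of heteroscedasticity the sandwich and pairs-bootstrap standard errors typically agree with each other and differ from the model-based ones, which is the setting the discussion's remarks on efficiency and balance address.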

Article information

Source
Statist. Sci., Volume 34, Number 4 (2019), 584–590.

Dates
First available in Project Euclid: 8 January 2020

Permanent link to this document
https://projecteuclid.org/euclid.ss/1578474023

Digital Object Identifier
doi:10.1214/19-STS746

Mathematical Reviews number (MathSciNet)
MR4048589

Keywords
Bootstrap; designed experiment; infinitesimal jackknife; model mis-specification; regression diagnostics; sandwich variance estimator

Citation

Davison, Anthony C.; Koch, Erwan; Koh, Jonathan. Comment: Models Are Approximations!. Statist. Sci. 34 (2019), no. 4, 584–590. doi:10.1214/19-STS746. https://projecteuclid.org/euclid.ss/1578474023


See also

  • Main article: Models as Approximations I: Consequences Illustrated with Linear Regression.
  • Main article: Models as Approximations II: A Model-Free Theory of Parametric Regression.