The Annals of Statistics

Goodness-of-fit tests via phi-divergences

Leah Jager and Jon A. Wellner

Full-text: Open access

Abstract

A unified family of goodness-of-fit tests based on φ-divergences is introduced and studied. The new family of test statistics Sn(s) includes both the supremum version of the Anderson–Darling statistic and the test statistic of Berk and Jones [Z. Wahrsch. Verw. Gebiete 47 (1979) 47–59] as special cases (s=2 and s=1, resp.). We also introduce integral versions of the new statistics.
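For concreteness, here is a minimal numerical sketch of the family. The formulas are the standard presentation of these statistics and are not restated in this abstract, so the exact boundary conventions below are an assumption: with K_s(u, v) = [1 − u^s v^{1−s} − (1 − u)^s (1 − v)^{1−s}]/(s(1 − s)) for s ≠ 0, 1 (and the Kullback–Leibler-type limits at s = 0 and s = 1), the sup-type statistic is S_n(s) = sup_x K_s(F_n(x), F_0(x)). Taking s = 2 gives K_2(u, v) = (u − v)²/(2v(1 − v)), the supremum Anderson–Darling form, while s = 1 gives the Berk–Jones statistic.

```python
import numpy as np

def K(u, v, s):
    """Phi-divergence kernel K_s(u, v) between Bernoulli(u) and Bernoulli(v);
    the limiting cases s = 1 (Kullback-Leibler, Berk-Jones) and s = 0
    (reversed KL) are handled explicitly."""
    if s == 1:
        return u * np.log(u / v) + (1 - u) * np.log((1 - u) / (1 - v))
    if s == 0:
        return v * np.log(v / u) + (1 - v) * np.log((1 - v) / (1 - u))
    return (1 - u**s * v**(1 - s) - (1 - u)**s * (1 - v)**(1 - s)) / (s * (1 - s))

def S_n(x, s, eps=1e-10):
    """Sup-type statistic S_n(s) for testing H_0: X_i ~ Uniform(0, 1):
    evaluate K_s(F_n, F_0) just to the left and right of each order
    statistic.  Boundary values are clipped into (0, 1) for numerical
    safety; exact boundary conventions follow the paper."""
    v = np.clip(np.sort(np.asarray(x, dtype=float)), eps, 1 - eps)
    n = len(v)
    i = np.arange(1, n + 1)
    lo = np.clip((i - 1) / n, eps, 1 - eps)   # F_n just left of X_(i)
    hi = np.clip(i / n, eps, 1 - eps)         # F_n at X_(i)
    return float(max(np.max(K(lo, v, s)), np.max(K(hi, v, s))))
```

Since K_s(u, ·) is convex in its second argument with minimum at v = u, the supremum over x is attained at one side of an order statistic, which is why the sketch only evaluates there.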

We show that the asymptotic null distribution theory of Berk and Jones [Z. Wahrsch. Verw. Gebiete 47 (1979) 47–59] and Wellner and Koltchinskii [High Dimensional Probability III (2003) 321–332. Birkhäuser, Basel] for the Berk–Jones statistic applies to the whole family of statistics Sn(s) with s∈[−1, 2]. On the side of power behavior, we study the test statistics under fixed alternatives and give extensions of the “Poisson boundary” phenomena noted by Berk and Jones for their statistic. We also extend the results of Donoho and Jin [Ann. Statist. 32 (2004) 962–994] by showing that all our new tests for s∈[−1, 2] have the same “optimal detection boundary” for normal shift mixture alternatives as Tukey’s “higher-criticism” statistic and the Berk–Jones statistic.
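The "optimal detection boundary" referenced above is, following Donoho and Jin [Ann. Statist. 32 (2004) 962–994], the threshold curve for the sparse normal mixture (1 − ε_n)N(0, 1) + ε_n N(μ_n, 1) with ε_n = n^{−β} and μ_n = √(2r log n): detection is asymptotically possible if and only if r exceeds ρ*(β). The abstract does not restate the curve, so the piecewise formula below is drawn from that paper rather than from this one:

```python
import math

def detection_boundary(beta):
    """Optimal detection boundary rho*(beta) for the sparse normal mixture
    (1 - eps) N(0,1) + eps N(mu,1), eps = n**(-beta), mu = sqrt(2 r log n)
    (Donoho-Jin 2004): detectable iff r > rho*(beta), for beta in (1/2, 1)."""
    if not 0.5 < beta < 1:
        raise ValueError("boundary is defined for beta in (1/2, 1)")
    if beta <= 0.75:          # moderately sparse regime
        return beta - 0.5
    return (1 - math.sqrt(1 - beta)) ** 2   # very sparse regime
```

The two branches agree at β = 3/4 (both give 1/4), so the boundary is continuous; the claim of the paper is that every S_n(s) test with s ∈ [−1, 2] achieves this same boundary, as do higher criticism and Berk–Jones.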

Article information

Source
Ann. Statist., Volume 35, Number 5 (2007), 2018–2053.

Dates
First available in Project Euclid: 7 November 2007

Permanent link to this document
https://projecteuclid.org/euclid.aos/1194461721

Digital Object Identifier
doi:10.1214/009053607000000244

Mathematical Reviews number (MathSciNet)
MR2363962

Zentralblatt MATH identifier
1126.62030

Subjects
Primary: 62G10: Hypothesis testing; 62G20: Asymptotic properties
Secondary: 62G30: Order statistics; empirical distribution functions

Keywords
alternatives; combining p-values; confidence bands; goodness-of-fit; Hellinger; large deviations; multiple comparisons; normalized empirical process; phi-divergence; Poisson boundaries

Citation

Jager, Leah; Wellner, Jon A. Goodness-of-fit tests via phi-divergences. Ann. Statist. 35 (2007), no. 5, 2018--2053. doi:10.1214/009053607000000244. https://projecteuclid.org/euclid.aos/1194461721

References

  • Abrahamson, I. G. (1967). Exact Bahadur efficiencies for the Kolmogorov–Smirnov and Kuiper one- and two-sample statistics. Ann. Math. Statist. 38 1475–1490.
  • Ali, S. M. and Silvey, S. D. (1966). A general class of coefficients of divergence of one distribution from another. J. Roy. Statist. Soc. Ser. B 28 131–142.
  • Anderson, T. W. and Darling, D. A. (1952). Asymptotic theory of certain “goodness of fit” criteria based on stochastic processes. Ann. Math. Statist. 23 193–212.
  • Berk, R. H. and Jones, D. H. (1978). Relatively optimal combinations of test statistics. Scand. J. Statist. 5 158–162.
  • Berk, R. H. and Jones, D. H. (1979). Goodness-of-fit test statistics that dominate the Kolmogorov statistics. Z. Wahrsch. Verw. Gebiete 47 47–59.
  • Bickel, P. J. and Rosenblatt, M. (1973). On some global measures of the deviations of density function estimates. Ann. Statist. 1 1071–1095.
  • Cai, T. T., Jin, J. and Low, M. G. (2005). Estimation and confidence sets for sparse normal mixtures. Ann. Statist. To appear.
  • Cayón, L., Jin, J. and Treaster, A. (2005). Higher criticism statistic: Detecting and identifying non-Gaussianity in the WMAP first-year data. Monthly Notices Royal Astronomical Soc. 362 826–832.
  • Chernoff, H. (1952). A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations. Ann. Math. Statist. 23 493–507.
  • Cressie, N. and Read, T. R. C. (1984). Multinomial goodness-of-fit tests. J. Roy. Statist. Soc. Ser. B 46 440–464.
  • Csiszár, I. (1963). Eine informationstheoretische Ungleichung und ihre Anwendung auf den Beweis der Ergodizität von Markoffschen Ketten. Magyar Tud. Akad. Mat. Kutató Int. Közl. 8 85–108.
  • Csiszár, I. (1967). Information-type measures of difference of probability distributions and indirect observations. Studia Sci. Math. Hungar. 2 299–318.
  • D'Agostino, R. B. and Stephens, M. A. (1986). Goodness-of-fit Techniques. Dekker, New York.
  • Darling, D. A. and Erdős, P. (1956). A limit theorem for the maximum of normalized sums of independent random variables. Duke Math. J. 23 143–155.
  • Donoho, D. and Jin, J. (2004). Higher criticism for detecting sparse heterogeneous mixtures. Ann. Statist. 32 962–994.
  • Durbin, J., Knott, M. and Taylor, C. C. (1975). Components of Cramér–von Mises statistics. II. J. Roy. Statist. Soc. Ser. B 37 216–237.
  • Eicker, F. (1979). The asymptotic distribution of the suprema of the standardized empirical processes. Ann. Statist. 7 116–138.
  • Einmahl, J. H. J. and McKeague, I. W. (2003). Empirical likelihood based hypothesis testing. Bernoulli 9 267–290.
  • Groeneboom, P. and Shorack, G. R. (1981). Large deviations of goodness of fit statistics and linear combinations of order statistics. Ann. Probab. 9 971–987.
  • Ingster, Y. I. (1997). Some problems of hypothesis testing leading to infinitely divisible distributions. Math. Methods Statist. 6 47–69.
  • Ingster, Y. I. (1998). Minimax detection of a signal for $l^n$-balls. Math. Methods Statist. 7 401–428.
  • Jaeschke, D. (1979). The asymptotic distribution of the supremum of the standardized empirical distribution function on subintervals. Ann. Statist. 7 108–115.
  • Jager, L. (2006). Goodness-of-fit statistics based on phi-divergences. Technical report, Dept. Statistics, Univ. Washington.
  • Jager, L. and Wellner, J. A. (2004). A new goodness of fit test: The reversed Berk–Jones statistic. Technical report, Dept. Statistics, Univ. Washington.
  • Jager, L. and Wellner, J. A. (2004). On the “Poisson boundaries” of the family of weighted Kolmogorov statistics. In A Festschrift for Herman Rubin (A. DasGupta, ed.) 319–331. IMS, Beachwood, OH.
  • Jager, L. and Wellner, J. A. (2006). Goodness-of-fit tests via phi-divergences. Technical report, Dept. Statistics, Univ. Washington.
  • Janssen, A. (2000). Global power functions of goodness of fit tests. Ann. Statist. 28 239–253.
  • Jin, J. (2004). Detecting a target in very noisy data from multiple looks. In A Festschrift for Herman Rubin (A. DasGupta, ed.) 255–286. IMS, Beachwood, OH.
  • Kallenberg, O. (1997). Foundations of Modern Probability. Springer, New York.
  • Khmaladze, E. V. (1998). Goodness of fit tests for “chimeric” alternatives. Statist. Neerlandica 52 90–111.
  • Khmaladze, E. and Shinjikashvili, E. (2001). Calculation of noncrossing probabilities for Poisson processes and its corollaries. Adv. in Appl. Probab. 33 702–716.
  • Liese, F. and Vajda, I. (1987). Convex Statistical Distances. Teubner, Leipzig.
  • Meinshausen, N. and Rice, J. (2006). Estimating the proportion of false null hypotheses among a large number of independently tested hypotheses. Ann. Statist. 34 373–393.
  • Nikitin, Y. (1995). Asymptotic Efficiency of Nonparametric Tests. Cambridge Univ. Press.
  • Noé, M. (1972). The calculation of distributions of two-sided Kolmogorov–Smirnov type statistics. Ann. Math. Statist. 43 58–64.
  • Owen, A. B. (1995). Nonparametric likelihood confidence bands for a distribution function. J. Amer. Statist. Assoc. 90 516–521.
  • Révész, P. (1982/83). A joint study of the Kolmogorov–Smirnov and the Eicker–Jaeschke statistics. Statist. Decisions 1 57–65.
  • Shorack, G. R. and Wellner, J. A. (1986). Empirical Processes with Applications to Statistics. Wiley, New York.
  • Vajda, I. (1989). Theory of Statistical Inference and Information. Kluwer, Dordrecht.
  • Wellner, J. A. (1977). Distributions related to linear bounds for the empirical distribution function. Ann. Statist. 5 1003–1016.
  • Wellner, J. A. (1977). A Glivenko–Cantelli theorem and strong laws of large numbers for functions of order statistics. Ann. Statist. 5 473–480.
  • Wellner, J. A. (1978). Limit theorems for the ratio of the empirical distribution function to the true distribution function. Z. Wahrsch. Verw. Gebiete 45 73–88.
  • Wellner, J. A. and Koltchinskii, V. (2003). A note on the asymptotic distribution of Berk–Jones type statistics under the null hypothesis. In High Dimensional Probability III (J. Hoffmann-Jørgensen, M. B. Marcus and J. A. Wellner, eds.) 321–332. Birkhäuser, Basel.