Open Access
Could Fisher, Jeffreys and Neyman Have Agreed on Testing?
James O. Berger
Statist. Sci. 18(1): 1-32 (February 2003). DOI: 10.1214/ss/1056397485

Abstract

Ronald Fisher advocated testing using p-values, Harold Jeffreys proposed use of objective posterior probabilities of hypotheses and Jerzy Neyman recommended testing with fixed error probabilities. Each was quite critical of the other approaches. Most troubling for statistics and science is that the three approaches can lead to quite different practical conclusions.

This article focuses on discussion of the conditional frequentist approach to testing, which is argued to provide the basis for a methodological unification of the approaches of Fisher, Jeffreys and Neyman. The idea is to follow Fisher in using p-values to define the "strength of evidence" in data and to follow his approach of conditioning on strength of evidence; then follow Neyman by computing Type I and Type II error probabilities, but do so conditional on the strength of evidence in the data. The resulting conditional frequentist error probabilities equal the objective posterior probabilities of the hypotheses advocated by Jeffreys.
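The abstract's central claim can be illustrated with a minimal simulation sketch (not taken from the paper): for two simple hypotheses N(0,1) versus N(1,1) with equal prior probabilities, Jeffreys' posterior probability of H0 is B/(1+B), where B is the likelihood ratio f0(x)/f1(x). Conditioning on data with similar strength of evidence, the frequentist proportion of cases in which H0 is actually true should match that posterior probability. The alternative mean of 1.0 and the binning scheme are illustrative choices, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
theta = 1.0  # illustrative alternative mean (assumption, not from the paper)

# Simulate: H0 (mean 0) and H1 (mean theta) each with prior probability 1/2
h1 = rng.random(n) < 0.5
x = rng.normal(np.where(h1, theta, 0.0), 1.0)

# Likelihood ratio B = f0(x)/f1(x) for N(0,1) vs N(theta,1)
B = np.exp(-0.5 * x**2 + 0.5 * (x - theta) ** 2)

# Jeffreys' objective posterior probability of H0 under equal priors
post_h0 = B / (1 + B)

# Conditional frequentist check: among datasets carrying similar
# strength of evidence, the empirical frequency of H0 being true
# should agree with the posterior probability.
bins = np.linspace(0, 1, 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (post_h0 >= lo) & (post_h0 < hi)
    if mask.sum() > 1000:
        print(f"posterior in [{lo:.1f},{hi:.1f}): "
              f"empirical P(H0 true) = {(~h1)[mask].mean():.3f}")
```

Each printed empirical frequency lands near the midpoint of its posterior-probability bin, which is the sense in which conditional frequentist error probabilities coincide with Jeffreys' posterior probabilities.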

Citation


James O. Berger. "Could Fisher, Jeffreys and Neyman Have Agreed on Testing?." Statist. Sci. 18 (1) 1 - 32, February 2003. https://doi.org/10.1214/ss/1056397485

Information

Published: February 2003
First available in Project Euclid: 23 June 2003

zbMATH: 1048.62006
MathSciNet: MR1997064
Digital Object Identifier: 10.1214/ss/1056397485

Keywords: conditional testing, posterior probabilities of hypotheses, p-values, Type I and Type II error probabilities

Rights: Copyright © 2003 Institute of Mathematical Statistics
