## The Annals of Mathematical Statistics

### On Mixed Single Sample Experiments

Leonard Cohen

#### Abstract

William Kruskal [1], Howard Raiffa [2], and J. L. Hodges, Jr. and E. L. Lehmann [4] have shown that in certain Neyman-Pearson type problems of testing a simple hypothesis against a simple alternative, determining the sample size by means of a chance device yields improvements over fixed sample size procedures. The purpose of this paper is to investigate not only the general problem of randomizing over fixed sample size tests of a simple hypothesis against a simple alternative, but also randomization over other fixed sample size procedures in topics such as confidence interval estimation, the $k$-decision problem, etc. In Section 2, a fixed sample size test of a simple hypothesis against a simple alternative is identified with an operating characteristic $(\alpha, \beta, n)$, where $\alpha$ denotes the probability of a type I error, $\beta$ denotes the probability of a type II error, and $n$ denotes the sample size. A mixed single sample test is defined as a sequence of quadruples $(\gamma_i, \alpha_i, \beta_i, n_i)$, where $\gamma_i \geqq 0$, $\sum^\infty_{i = 1}\gamma_i = 1$, where $(\alpha_i, \beta_i, n_i)$ is a fixed sample size test, and where $\gamma_i$ is interpreted as the probability of using the fixed sample size test $(\alpha_i, \beta_i, n_i)$ for $i = 1, 2, \cdots$. A mixed single sample test is identified with an operating characteristic $(\alpha, \beta, n) = \sum^\infty_{i = 1}\gamma_i(\alpha_i, \beta_i, n_i)$. For each nonnegative integer $n$, the class $A_n$ of admissible fixed sample size procedures of sample size $n$ is defined in the obvious way. We define $A = \bigcup^\infty_{i = 0}A_i$ and $A^{\ast}$ as the convex hull of $A$. An example is given to show that $A^{\ast}$ is not necessarily closed. However, the lower boundary of $A^{\ast}$ is a subset of $A^{\ast}$, so that the lower boundary of $A^{\ast}$ determines a minimally complete class, $\mathcal{a}$, of mixed single sample tests.
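The operating characteristic of a mixed single sample test is simply the convex combination of its components' operating characteristics. A minimal Python sketch of this bookkeeping (the quadruple representation is taken from the definition above; the function name is illustrative):

```python
def mixed_oc(components):
    """Operating characteristic (alpha, beta, n) of a mixed single sample
    test given as a list of quadruples (gamma_i, alpha_i, beta_i, n_i)."""
    gammas = [g for g, _, _, _ in components]
    assert all(g >= 0 for g in gammas) and abs(sum(gammas) - 1) < 1e-12
    alpha = sum(g * a for g, a, _, _ in components)  # overall type I error
    beta = sum(g * b for g, _, b, _ in components)   # overall type II error
    n = sum(g * m for g, _, _, m in components)      # expected sample size
    return alpha, beta, n

# Mixing two fixed sample size tests with equal probability:
print(mixed_oc([(0.5, 0.05, 0.2, 10), (0.5, 0.01, 0.1, 20)]))
```

Here the third coordinate of the mixture is an *expected* sample size, which is what makes randomization capable of improving on any single fixed sample size test.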
The tests in $\mathcal{a}$ are characterized from a Bayes point of view, and a technique for constructing the tests in $\mathcal{a}$ is given. In Section 3, the technique is applied to tests on the mean of a normal distribution with known variance. It is shown that the tests in $\mathcal{a}$ are either (a) fixed sample size tests, or (b) mixtures of at most two fixed sample size tests. It is shown that there exists a minimal subset $\mathcal{a}_0$ of $A$ such that all improved randomized procedures are of the form $(\alpha, \beta, n) = \gamma(0, 1, 0) + (1 - \gamma)(\alpha_0, \beta_0, n_0)$ or $(\alpha, \beta, n) = \gamma(1, 0, 0) + (1 - \gamma)(\alpha_0, \beta_0, n_0)$, where $0 < \gamma < 1$ and where $(\alpha_0, \beta_0, n_0) \in \mathcal{a}_0$. It is then shown how to construct $\mathcal{a}_0$. The following problems (of the Neyman-Pearson type) are solved: (a) Given $\alpha$ and $\beta$, how can we find the test in $\mathcal{a}$ with the given $\alpha$ and $\beta$? (b) Given $\alpha$ and $n$, how can we find the test in $\mathcal{a}$ with the given $\alpha$ and $n$? Numerical examples are worked out. In Section 4, the technique is applied to tests on the mean of a binomial distribution. Although no general results were obtained, numerical examples of interest are given. In Section 5, the technique is applied to tests on the range of a rectangular distribution (when one end point is known). It is shown that if $\alpha > 0$, $n > 0$, and $(\alpha, \beta, n) \in A_n$, then $(\alpha, \beta, n) \not\in \mathcal{a}$. The tests in $\mathcal{a}$ are characterized by a simple equation which makes it easy to (a) determine whether a given point $(\alpha, \beta, n)$ belongs to $\mathcal{a}$, and (b) construct any test in $\mathcal{a}$, given two of the three coordinates. It is shown that if $(\alpha, \beta, n) \in A_n$, then there exists a test $(\alpha, \beta, n')$ in $\mathcal{a}$ such that $n' = (1 - \alpha)n$.
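For the normal-mean setting of Section 3, the operating characteristic of a fixed sample size test can be computed directly; the mixtures described above then interpolate between two such points. A hedged sketch, assuming the standard one-sided setup with illustrative values $\mu_0 = 0$, $\mu_1 = 1$, $\sigma = 1$ (these specifics are not taken from the paper):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def norm_ppf(p):
    """Standard normal quantile by bisection (accurate enough here)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def fixed_test_oc(alpha, n, mu0=0.0, mu1=1.0, sigma=1.0):
    """Operating characteristic (alpha, beta, n) of the fixed sample size
    test rejecting H0: mu = mu0 against H1: mu = mu1 (> mu0) when the
    sample mean exceeds the critical value k."""
    k = mu0 + sigma / sqrt(n) * norm_ppf(1 - alpha)
    beta = norm_cdf(sqrt(n) * (k - mu1) / sigma)
    return alpha, beta, n
```

For instance, `fixed_test_oc(0.05, 9)` gives a type II error near 0.088; a mixture $\gamma \cdot (\alpha_1, \beta_1, n_1) + (1-\gamma)(\alpha_2, \beta_2, n_2)$ of two such points traces the line segment between them in $(\alpha, \beta, n)$-space.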
Hence, the fractional saving in the expected sample size achieved by randomization is equal to $\alpha$. In Section 6, it is shown that in tests on the mean of a rectangular distribution (with known range), it never pays to randomize. In Section 7, confidence intervals are evaluated in terms of confidence coefficient $(\alpha)$, expected length $(L)$, and expected sample size $(n)$. For the problem of obtaining a confidence interval for the mean of a normal distribution with known variance, "improved" randomized procedures exist and are of the form $(\alpha, L, n) = \gamma(0, 0, 0) + (1 - \gamma)(\alpha', L', n')$, where $0 < \gamma < 1$ and where $(\alpha', L', n')$ is a fixed sample size confidence interval procedure. Clearly, the randomized procedures obtained are of such a nature that the question of confidence intervals evaluated in terms of expected length and/or expected sample size is thrown open to discussion. In Section 8, the $k$-decision problem is discussed. It is shown that improvements can be obtained by randomization. In Section 9, the problem of applying mixed single sample tests of a composite hypothesis against a composite alternative is discussed. In Section 10, mixed single sample procedures are compared to Wald's sequential probability ratio test in the problem of tests on the range of a rectangular distribution when one end point is known, and are shown to be efficient in a certain sense. In Section 11, the estimation problem is mentioned. It is shown that in most practical problems, fixed sample size procedures are optimal. In Section 12, applications of mixed single sample tests are discussed.
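The fractional-saving claim for the rectangular distribution (Section 5) can be checked numerically. A sketch under assumed conventions (not spelled out in the abstract): $X_1, \cdots, X_n$ uniform on $(0, \theta)$, testing $\theta = \theta_0$ against $\theta = \theta_1 > \theta_0$, with the fixed sample size test rejecting when $\max X_i > c$; the particular values of $\theta_0$, $\theta_1$, $n$, $c$ are illustrative:

```python
from math import isclose

# Fixed sample size test: reject H0: theta = theta0 when max X_i > c,
# so alpha = 1 - (c/theta0)^n and beta = (c/theta1)^n.
theta0, theta1, n, c = 1.0, 1.5, 5, 0.9

alpha = 1 - (c / theta0) ** n        # type I error of the fixed test
beta = (c / theta1) ** n             # type II error of the fixed test

# Mixed test: with probability gamma = alpha use the no-sample test
# (1, 0, 0) (reject immediately); otherwise take n observations and
# reject iff max X_i > theta0 (a test with alpha = 0).
gamma = alpha
alpha_mix = gamma * 1 + (1 - gamma) * 0
beta_mix = (1 - gamma) * (theta0 / theta1) ** n
n_mix = (1 - gamma) * n              # expected sample size

assert isclose(alpha_mix, alpha)                # same type I error
assert isclose(beta_mix, beta)                  # same type II error
assert isclose(n_mix, (1 - alpha) * n)          # saving fraction = alpha
```

The mixture matches the fixed test's $\alpha$ and $\beta$ exactly while reducing the expected sample size from $n$ to $(1 - \alpha)n$, as stated above.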

#### Article information

Source
Ann. Math. Statist., Volume 29, Number 4 (1958), 947-971.

Dates
First available in Project Euclid: 27 April 2007

https://projecteuclid.org/euclid.aoms/1177706435

Digital Object Identifier
doi:10.1214/aoms/1177706435

Mathematical Reviews number (MathSciNet)
MR99736

Zentralblatt MATH identifier
0094.14003
