## The Annals of Mathematical Statistics

### Contributions to the Theory of Sequential Analysis. I

M. A. Girshick

#### Abstract

Consider two populations $\pi_1$ and $\pi_2$, each characterized by a distribution density $f(x, \theta)$ that is assumed known except for the value of the parameter $\theta$. It is desired to test the composite hypothesis $\theta_1 < \theta_2$ against the alternative hypothesis $\theta_1 > \theta_2$, where $\theta_i$ is the value of the parameter in the distribution density of $\pi_i$ $(i = 1, 2)$. The criterion proposed for testing this hypothesis is based on the sequential probability ratio and consists of the following. Choose two positive constants $a$ and $b$ and two values of $\theta$, say $\theta^0_1$ and $\theta^0_2$. Take pairs of observations $x_{1\alpha}$ from $\pi_1$ and $x_{2\alpha}$ from $\pi_2$ $(\alpha = 1, 2, \ldots)$ in sequence and compute $Z_j = \sum^j_{\alpha = 1} z_\alpha$, where $z_\alpha = \log \big\lbrack \frac{f(x_{2\alpha}, \theta^0_1)\, f(x_{1\alpha}, \theta^0_2)} {f(x_{2\alpha}, \theta^0_2)\, f(x_{1\alpha}, \theta^0_1)}\big\rbrack.$ The hypothesis tested is accepted or rejected according as $Z_n \geq a$ or $Z_n \leq - b$, where $n$ is the smallest integer $j$ for which either of these relationships is satisfied. The boundaries $a$ and $b$ are determined in part by the desired risks of making an erroneous decision; the values $\theta^0_1$ and $\theta^0_2$ fix the magnitude of the difference between the values of $\theta$ in $\pi_1$ and $\pi_2$ that is considered worth detecting. It is shown that the power of this test is constant on each curve $h(\theta_1, \theta_2) =$ constant. If $E\big(\log \frac{f(x, \theta^0_2)}{f(x, \theta^0_1)}\big)$ is a monotonic function of $\theta$, then the test is unbiased in the sense that all points $(\theta_1, \theta_2)$ lying on a given curve $h(\theta_1, \theta_2) =$ constant satisfy either $\theta_1 < \theta_2$ throughout or $\theta_1 > \theta_2$ throughout.
For a large class of known distributions the quantity $h$ is shown to be an appropriate measure of the difference between $\theta_1$ and $\theta_2$ and the test procedure for this class of distributions is simple and intuitively sensible. For the case of the binomial, the exact power of this test as well as the distribution of $n$ is given.
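The stopping rule described in the abstract can be sketched in Python. This is an illustrative sketch only: the function names, the Bernoulli (binomial) density used as an example, and the `max_steps` truncation guard are assumptions of this sketch, not part of the paper, which gives no truncation rule.

```python
import math

def sequential_two_sample_test(draw1, draw2, log_f,
                               theta1_0, theta2_0, a, b, max_steps=10_000):
    """Paired sequential probability-ratio test, following the abstract.

    draw1/draw2 yield one observation from pi_1 / pi_2 per call; log_f(x, theta)
    is the log of the known density f(x, theta).  Z_j is the cumulative sum of
        z_alpha = log[ f(x2, theta1_0) f(x1, theta2_0)
                     / (f(x2, theta2_0) f(x1, theta1_0)) ],
    and sampling stops at the first n with Z_n >= a (hypothesis accepted)
    or Z_n <= -b (hypothesis rejected).
    """
    Z = 0.0
    for n in range(1, max_steps + 1):
        x1, x2 = draw1(), draw2()
        z = (log_f(x2, theta1_0) + log_f(x1, theta2_0)
             - log_f(x2, theta2_0) - log_f(x1, theta1_0))
        Z += z
        if Z >= a:
            return "accept", n
        if Z <= -b:
            return "reject", n
    # Truncation guard for the sketch only; the paper's rule samples until
    # one of the two boundaries is crossed.
    return "undecided", max_steps

def log_bernoulli(x, p):
    """log f(x, p) for the Bernoulli case, x in {0, 1}."""
    return math.log(p) if x == 1 else math.log(1.0 - p)
```

For instance, with $\theta^0_1 = 0.2$, $\theta^0_2 = 0.8$, and degenerate draws $x_{1\alpha} = 0$, $x_{2\alpha} = 1$, each $z_\alpha = 2\log(0.2/0.8) \approx -2.77$, so with $a = b = 2$ the test stops at the lower boundary on the first pair.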

#### Article information

**Source**
Ann. Math. Statist., Volume 17, Number 2 (1946), 123-143.

**Dates**
First available in Project Euclid: 28 April 2007

https://projecteuclid.org/euclid.aoms/1177730976

**Digital Object Identifier**
doi:10.1214/aoms/1177730976

**Mathematical Reviews number (MathSciNet)**
MR16623

**Zentralblatt MATH identifier**
0063.01636
