Open Access
The Sequential Compound Decision Problem with $m \times n$ Finite Loss Matrix
J. Van Ryzin
Ann. Math. Statist. 37(4): 954-975 (August, 1966). DOI: 10.1214/aoms/1177699376

Abstract

Consideration of a sequence of statistical decision problems having identical generic structure constitutes a sequential compound decision problem. The risk of a sequential compound decision problem is defined as the average risk of the component problems. In the case where the component decisions are between two fully specified distributions $P_1$ and $P_2$, $P_1 \neq P_2$, Samuel (Theorem 2 of [9]) gives a sequential decision function whose risk is bounded above by the risk of a best "simple" procedure (one based on knowing the proportion of component problems in which $P_2$ is the governing distribution) plus a sequence of positive numbers converging to zero, uniformly in the space of parameter-valued sequences, as the number of problems increases. Related results are abstracted by Hannan in [2] for the sequential compound decision problem in which the parameter space of the component problem is finite. The decision procedures in both instances rely on the technique of "artificial randomization," introduced and used effectively by Hannan in [1] for sequential games in which player I's space is finite. In the game situation such randomization is necessary. In the compound decision problem, however, such "artificial randomization" is not necessary, as is shown in this paper. Specifically, we consider the case where each component problem consists of making one of $n$ decisions based on an observation from one of $m$ distributions. Theorems 4.1, 4.2, and 4.3 give upper bounds for the difference in risks (the regret function) between certain sequential compound decision procedures and a best "simple" procedure, which is Bayes against the empirical distribution on the component-problem parameter space. None of the sequential procedures presented depends on "artificial randomization." The upper bounds in these three theorems are all of order $N^{-\frac{1}{2}}$ and are uniform in the parameter-valued sequences. All procedures depend at stage $k$ on substituting estimates of the $(k-1)$st (or $k$th) stage empirical distribution $p_{k-1}$ (or $p_k$) on the component parameter space into a Bayes solution of the component problem with respect to $p_{k-1}$ (or $p_k$). Theorem 4.1 (except in the case where the estimates are degenerate) and Theorem 4.3, when specialized to the compound testing case between $P_1$ and $P_2$ (Theorems 5.1 and 5.2), yield a threefold improvement of Samuel's results mentioned above: they simultaneously eliminate the "artificial randomization," improve the convergence rate of the upper bound of the regret function to $N^{-\frac{1}{2}}$, and widen the class of estimates. Higher-order uniform bounds on the regret function in the sequential compound testing problem are also given. The bounds in Theorems 5.3 and 5.4 (or Theorems 5.5 and 5.6) are of $O((\log N)N^{-1})$ and $o(N^{-\frac{1}{2}})$, respectively, and are attained by imposing suitable continuity assumptions on the induced distribution of a certain function of the likelihood ratio of $P_1$ and $P_2$. Theorem 6.1 extends Theorems 4.1, 4.2, and 4.3 to the related "empirical Bayes" problem. Lower bounds of equivalent or better order are also given for all theorems. The next section introduces notation and preliminaries to be used in this paper and in the following paper [15].
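
For orientation, the risk, the "simple" benchmark, and the plug-in rule described above can be written out as follows; the notation is a sketch suggested by the abstract and may differ from the paper's own symbols.

\[
R_N(\underline{\theta}, \underline{t}) = \frac{1}{N} \sum_{k=1}^{N} E\, L(\theta_k, t_k(X_1, \ldots, X_k)), \qquad p_N(i) = \frac{1}{N} \#\{k \le N : \theta_k = i\}, \quad i = 1, \ldots, m,
\]
\[
D_N(\underline{\theta}, \underline{t}) = R_N(\underline{\theta}, \underline{t}) - \phi(p_N), \qquad \phi(p) = \text{the Bayes risk of the component problem against the prior } p,
\]

where $L$ is the $m \times n$ loss matrix, $X_k$ has distribution $P_{\theta_k}$, and $D_N$ is the regret function. In this notation, the plug-in procedures act at stage $k$ with a component Bayes rule against an estimate $\hat{p}_{k-1}$ of $p_{k-1}$ (or $\hat{p}_k$ of $p_k$) formed from the observations, and the theorems bound $\sup_{\underline{\theta}} D_N(\underline{\theta}, \underline{t})$ by quantities of order $N^{-\frac{1}{2}}$.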

Citation


J. Van Ryzin. "The Sequential Compound Decision Problem with $m \times n$ Finite Loss Matrix." Ann. Math. Statist. 37(4): 954-975, August, 1966. https://doi.org/10.1214/aoms/1177699376

Information

Published: August, 1966
First available in Project Euclid: 27 April 2007

zbMATH: 0173.46303
MathSciNet: MR198640
Digital Object Identifier: 10.1214/aoms/1177699376

Rights: Copyright © 1966 Institute of Mathematical Statistics
