## The Annals of Mathematical Statistics

### Estimators of a Location Parameter in the Absolutely Continuous Case

R. H. Farrell

#### Abstract

In the last decade there have been a number of papers dealing with the admissibility of translation invariant estimators of a location parameter. Blyth [2] treated sequential procedures in the case of normally or rectangularly distributed random variables. If $d$ is estimated and $\theta$ is the actual parameter value, for Blyth, op. cit., loss was measured by $W(|d - \theta|)$ where $W(\cdot)$ was a nondecreasing function on $\lbrack 0, \infty)$. In the same year Blackwell [1] treated the fixed sample size problem in the case of discrete random variables taking only a finite number of values. For Blackwell, op. cit., loss was measured by $W(d - \theta)$ where $W(\cdot)$ was assumed continuous and bounded from below but otherwise arbitrary. Blackwell showed that if the discrete random variables (which could be vector valued) took values only on the integer lattice points and if there was a unique minimax translation invariant estimator then it was admissible. Later papers by Karlin [7] and Stein [11] discuss the admissibility of Pitman's estimator for square error. In reviewing these results we discovered that if the loss satisfied \begin{equation*}\tag{0.1} 0 \leqq x < y \text{ implies } W(x) < W(y),\quad y < x \leqq 0 \text{ implies } W(x) < W(y),\end{equation*} and if there were several minimax translation invariant estimators then no translation invariant estimator could be admissible. This and a related result constitute Section 4. It was logical to ask the converse question: does uniqueness imply admissibility? In the case of square error Pitman's estimator (except for changes on sets of measure zero) is necessarily unique since the loss function is strictly convex. In the case of normally distributed random variables and symmetrical loss functions as considered by Blyth, op. cit., the sample mean is the uniquely determined minimax translation invariant estimator.
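For orientation, the Pitman estimator mentioned above can be written, in standard notation (the symbols $X_1, \dots, X_n$, $f$, and $n$ are supplied here, not taken from the abstract): for square error loss and observations with common density $f(\cdot - \theta)$,

```latex
\hat{\theta}(X_1, \dots, X_n)
  = \frac{\int_{-\infty}^{\infty} \theta \prod_{i=1}^{n} f(X_i - \theta)\, d\theta}
         {\int_{-\infty}^{\infty} \prod_{i=1}^{n} f(X_i - \theta)\, d\theta},
```

the mean of the formal posterior obtained from Lebesgue measure as an improper prior; strict convexity of the square error loss is what makes this minimizer essentially unique.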
In this paper we have restricted the discussion to random variables which have density functions relative to Lebesgue measure. Since it proved possible to deal with the question of admissibility for a larger class of estimators than the translation invariant estimators, we define generalized Bayes estimators as the solution of a minimization problem. Section 2 deals with the question of the existence of measurable solutions to the minimization problem. Sections 5 and 6 deal with the question, does uniqueness imply admissibility? Section 3 deals with the question of whether generalized Bayes estimators are strongly consistent and shows that under mild restrictions this is the case. Section 8 is a generalization of the results of Katz [8] for minimax estimators of $\theta \in \lbrack 0, \infty)$. We show how to construct such estimators whenever loss is measured by $W(d - \theta)$, with $W(\cdot)$ strictly convex, non-negative, and $W(0) = 0$. Blyth, op. cit., proved admissibility only within the class of continuous risk functions. In Section 9 we remove this restriction. We then show that if the loss function $W(\cdot)$ is strictly convex and symmetrical then the sample mean based on $n$ observations is admissible, in the nonparametric context of estimating the mean of an unknown density function, within the class of all sequential procedures having expected sample size $\leqq n$.
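The minimization problem defining a generalized Bayes estimator can be sketched as follows, under the assumption (consistent with the abstract, with notation supplied here) of a general loss $W(d - \theta)$ and Lebesgue measure playing the role of an improper prior:

```latex
\hat{d}(x_1, \dots, x_n) \in \operatorname*{arg\,min}_{d}
  \int_{-\infty}^{\infty} W(d - \theta) \prod_{i=1}^{n} f(x_i - \theta)\, d\theta.
```

With $W(x) = x^2$ this reduces to Pitman's estimator; for general $W$ a minimizer need not exist or be unique, which is why Section 2 treats the existence of measurable solutions.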

#### Article information

Source
Ann. Math. Statist., Volume 35, Number 3 (1964), 949-998.

Dates
First available in Project Euclid: 27 April 2007

https://projecteuclid.org/euclid.aoms/1177700516

Digital Object Identifier
doi:10.1214/aoms/1177700516

Mathematical Reviews number (MathSciNet)
MR171359

Zentralblatt MATH identifier
0223.62014
