## Annals of Mathematical Statistics

- Ann. Math. Statist.
- Volume 42, Number 4 (1971), 1187-1205.

### Confidence Intervals for Linear Functions of the Normal Mean and Variance

#### Abstract

If $Y = g(X)$ is normal $(\mu, \sigma^2)$, where $g$ is a one-to-one real function and $X$ is a random variable whose expectation exists, we may write $EX = f(\mu, \sigma^2)$. The practical importance of this observation is that we often are concerned with testing hypotheses about, and constructing confidence intervals for, known functions of both the mean and variance of a normal distribution. This may happen when we use a statistical model, such as the lognormal distribution, that is related to the normal distribution by a transformation of variables. A slightly different case occurs when a transformation of data is made before applying a statistical method, such as analysis of variance or regression analysis, that involves the assumption of normality for the transformed data. Some familiar examples in this context are:

- (i) $Y = X^{\frac{1}{2}}$, $EX = \mu^2 + \sigma^2$;
- (ii) $Y = X^{\frac{1}{3}}$, $EX = \mu^3 + 3\mu\sigma^2$;
- (iii) $Y = \arcsin (X^{\frac{1}{2}})$, $EX = \frac{1}{2}(1 - \cos(2\mu) \exp (-2\sigma^2))$;
- (iv) $Y = \operatorname{arcsinh} (X^{\frac{1}{2}})$, $EX = \frac{1}{2}(\cosh(2\mu) \exp (2\sigma^2) - 1)$;
- (v) $Y = \log(X)$, $EX = \exp (\mu + \frac{1}{2}\sigma^2)$.

The theory of statistical inference in terms of $\mu = EY$ alone or $\sigma^2 = \operatorname{Var} Y$ alone is not easily extended to problems of inference in terms of $EX$ or $\operatorname{Var} X$, which are parametric functions of both $\mu$ and $\sigma^2$. Minimum variance unbiased estimators (MVUE's) for $EX$ and $\operatorname{Var} X$ were obtained by Finney (1941) for the case $Y = \log X$. Solutions for a much wider class of transformations were obtained by Neyman and Scott (1960) and Hoyle (1968). However, there have been no analogous achievements with respect to hypothesis tests and confidence interval estimates for $EX$ and $\operatorname{Var} X$.
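Two of the moment identities above, (i) and (v), can be checked numerically with a short Monte Carlo sketch; the parameter values $\mu = 1.0$, $\sigma = 0.5$ are illustrative choices, not taken from the paper:

```python
import numpy as np

# Monte Carlo check of examples (i) and (v) above.
# mu and sigma are arbitrary illustrative values.
rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.5
y = rng.normal(mu, sigma, size=1_000_000)  # Y ~ normal(mu, sigma^2)

# Example (i): Y = X^(1/2), so X = Y^2 and EX = mu^2 + sigma^2.
ex_i_empirical = np.mean(y ** 2)
ex_i_theory = mu**2 + sigma**2

# Example (v): Y = log(X), so X = exp(Y) and EX = exp(mu + sigma^2 / 2).
ex_v_empirical = np.mean(np.exp(y))
ex_v_theory = np.exp(mu + sigma**2 / 2)
```

With a million draws, the empirical means agree with the closed forms to roughly two decimal places.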
The present paper, in which uniformly most accurate unbiased confidence interval procedures of level $1 - \alpha$ are derived for linear functions of $\mu$ and $\sigma^2$, is an approach to these problems. The results of this paper define an optimal solution for $EX$ when $Y = \log X$, since in this case the parametric function of interest is a monotone function of $\mu + \frac{1}{2}\sigma^2$. The results also provide a basis for approximate confidence interval solutions for other parametric functions of $\mu$ and $\sigma^2$. It is helpful to consider the problem in terms of confidence regions in the half-plane of points $(\mu, \sigma^2)$. For any transformation $Y = g(X)$ likely to be of practical significance, a confidence interval for $f(\mu, \sigma^2) = EX$ or $\operatorname{Var} X$ is a region in this half-plane, bounded by one or two contours of the form $f(\mu, \sigma^2) = m$. Kanofsky (1969) has proposed a method of simultaneous confidence estimation for all functions of $\mu$ and $\sigma^2$. He constructs a trapezoidal confidence region of level $1 - \alpha$ for $\mu$ and $\sigma^2$, and for an arbitrary function $h(\mu, \sigma^2)$, defines a confidence set for this function as the set of values $m$ such that the curve $h(\mu, \sigma^2) = m$ intersects this confidence region. If one is interested in only a single function, the procedure is conservative. However, for most such functions this is the only method based on exact distribution theory, to my knowledge, that has been proposed. The usual approach to confidence interval estimation for $EX$ or $\operatorname{Var} X$ has been to rely on approximate methods. For example, a common method of confidence interval estimation for $EX$ is to transform a level $1 - \alpha$ confidence interval for $EY = E(g(X))$, say $(\mu_1, \mu_2)$, by the inverse transform. Then $(g^{-1}(\mu_1), g^{-1}(\mu_2))$ would be an approximate level $1 - \alpha$ confidence interval for $EX$ if $g$ is monotone increasing.
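The naive back-transform method just described can be sketched for the lognormal case $Y = \log(X)$. This is a minimal stdlib-only illustration, not the paper's procedure; the data are simulated, and a Student-$t$ quantile would be slightly more exact than the normal quantile used here:

```python
import math
import random
from statistics import NormalDist, mean, stdev

def backtransform_ci(x, alpha=0.05):
    """Approximate 1 - alpha interval for EX obtained by exponentiating
    a confidence interval for mu = E(log X).  Since exp(mu) is the
    median of X rather than EX = exp(mu + sigma^2/2), the interval is
    centered too low as an interval for EX."""
    y = [math.log(xi) for xi in x]
    z = NormalDist().inv_cdf(1 - alpha / 2)       # normal quantile
    half = z * stdev(y) / math.sqrt(len(y))        # half-width for mu
    return math.exp(mean(y) - half), math.exp(mean(y) + half)

# Illustrative data: X lognormal with mu = 1.0, sigma = 0.5.
random.seed(1)
x = [math.exp(random.gauss(1.0, 0.5)) for _ in range(200)]
lo, hi = backtransform_ci(x)
```

The docstring notes the systematic shortcoming that motivates the paper: the interval is a transformed interval for $\mu$ alone, so it ignores the $\frac{1}{2}\sigma^2$ term in $EX$.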
More sophisticated versions of this method have been proposed by Patterson (1966) and Hoyle (1968). A more direct approach is to use an estimator $T$ of $f(\mu, \sigma^2)$ and an estimator $V$ of the variance of $T$. $T$ is then assumed to be approximately normally distributed with mean $f(\mu, \sigma^2)$ and variance equal to the observed value of $V$. For example, the sample mean $\bar{X}$ is an estimate of $EX$, and $S_X^2/(n(n - 1)) = \sum(X_i - \bar{X})^2/(n(n - 1))$ is an estimate of the variance of $\bar{X}$ (e.g., see Aitchison and Brown (1957) Section 5.62). Hoyle (1968) has suggested letting $T$ be the MVUE of $EX$, and $V$ the MVUE of $\operatorname{Var} T$, which he has given for a number of transformations. In this paper an optimal exact confidence interval procedure is presented for linear functions of $\mu$ and $\sigma^2$. That is, the procedure gives uniformly most accurate unbiased joint confidence regions of level $1 - \alpha$ for $\mu$ and $\sigma^2$, bounded by one or two contours of form $\mu + \lambda\sigma^2 = m$, for arbitrary $\lambda$. This provides an optimal confidence interval procedure for $EX$ when $Y = \log (X)$ is normal. Also it provides the basis for a new approximate confidence interval method for $EX$ in the general case $Y = g(X)$. That is, by a proper choice of the linear coefficient $\lambda$, it seems reasonable that a confidence region bounded by one or two contours of form $f(\mu, \sigma^2) = m$ might be approximated with some success by a confidence region bounded by contours of the form $\mu + \lambda\sigma^2 = m$. Certainly the degree of approximation possible should be better than that obtainable using only vertical bounding contours, as when a confidence interval for $\mu$ is transformed to give an approximate confidence interval for $EX$.
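The direct normal-approximation method described above ($T = \bar{X}$, $V = \sum(X_i - \bar{X})^2/(n(n-1))$) can be sketched in a few lines; the function name is a hypothetical helper, not from the paper:

```python
import math
from statistics import NormalDist

def normal_approx_ci(x, alpha=0.05):
    """Approximate 1 - alpha interval for EX treating T = Xbar as
    normal with mean EX and variance V = sum((X_i - Xbar)^2)/(n(n-1))."""
    n = len(x)
    xbar = sum(x) / n                                       # T
    v = sum((xi - xbar) ** 2 for xi in x) / (n * (n - 1))   # est. Var(T)
    half = NormalDist().inv_cdf(1 - alpha / 2) * math.sqrt(v)
    return xbar - half, xbar + half

# Toy data: Xbar = 3, V = 10 / (5 * 4) = 0.5.
lo, hi = normal_approx_ci([1, 2, 3, 4, 5])
```

Unlike the back-transform method, this interval is symmetric about $\bar{X}$, which is one reason it can perform poorly for skewed distributions such as the lognormal.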
Also, if the contours $f(\mu, \sigma^2) = m$ are fairly straight within a convex joint confidence region of level $1 - \alpha$ for $\mu$ and $\sigma^2$, it is not unreasonable to hope that an approximate confidence region should be possible that would have a true level near $1 - \alpha$, and that would be less conservative than a level $1 - \alpha$ region for $f(\mu, \sigma^2)$ determined by Kanofsky's method. The main result of the paper is the derivation in Section 2 of uniformly most powerful unbiased level $\alpha$ hypothesis tests for linear functions of $\mu$ and $\sigma^2$. The theoretical interest of this section is mainly in the analytic detail of how a well-known theorem applies to this somewhat unusual case. A numerical example follows, illustrating the use of the tables of critical values given in the Appendix. It is not obvious that the confidence procedures defined by these tests in Section 4 define confidence sets that are intervals, an extremely desirable property both for ease of calculation and for practical usefulness of the confidence sets. The proof in Section 5 that the one-sided tests define one-sided confidence intervals provided that $v$, the number of degrees of freedom available for the estimate of $\sigma^2$, is at least two, is the second major result of the paper. In Section 6 it is shown that this property does not obtain when $v = 1$. The analogous result in the two-sided case is proved only for $v = 2$ in Section 7. However it is conjectured that, as in the one-sided case, the desired property also holds for all larger values of $v$. The final section contains a brief discussion of applications of the method to confidence interval estimation for $EX$ when $Y = g(X)$ is normal. It is shown that essentially the only direct application is to the case where $Y = \log(X)$, and that there are no nontrivial direct applications where $EX$ is a function of $\mu$ or $\sigma^2$ alone. 
The construction of normal tolerance limits involves confidence interval estimation of functions of the form $\mu + \delta\sigma$ (Owen (1958)). However it is shown here that there are essentially no transformations to normality such that $EX$ is a function of $\mu + \delta\sigma$ for some $\delta$. A more complete discussion of approximate applications of the method is left for a subsequent paper.

#### Article information

**Source**

Ann. Math. Statist., Volume 42, Number 4 (1971), 1187-1205.

**Dates**

First available in Project Euclid: 27 April 2007

**Permanent link to this document**

https://projecteuclid.org/euclid.aoms/1177693235

**Digital Object Identifier**

doi:10.1214/aoms/1177693235

**Mathematical Reviews number (MathSciNet)**

MR314189

**Zentralblatt MATH identifier**

0223.62046

#### Citation

Land, Charles E. Confidence Intervals for Linear Functions of the Normal Mean and Variance. Ann. Math. Statist. 42 (1971), no. 4, 1187--1205. doi:10.1214/aoms/1177693235. https://projecteuclid.org/euclid.aoms/1177693235