## Quadratic Subspaces and Completeness

*The Annals of Mathematical Statistics*

Justus Seely

#### Abstract

Since the notion of completeness for a family of distributions was introduced by Lehmann and Scheffé [7], a problem of interest has been to determine conditions under which a complete sufficient statistic exists for a family of multivariate normal distributions. One approach to this problem, first formulated for a completely random model by Graybill and Hultquist [2] and extended to a mixed linear model by Basson [1], rests on a basic assumption of commutativity for certain pairs of matrices. In the present paper some of the commutativity conditions, and an associated eigenvalue condition, assumed in the completeness theorems of both [1] and [2] are replaced by the weaker requirement of a quadratic subspace. These quadratic subspaces are introduced and briefly investigated in Section 2 and are found to possess some rather interesting mathematical properties. The existence of $\bar{\mathscr{A}}$-best estimators (e.g., [12]) is also examined in several situations; and it is found that the usual estimators of the weighting factors for the recovery of interblock information in a balanced incomplete block design (treatments fixed and blocks random) have an optimal property when the number of treatments equals the number of blocks. Throughout the paper $(\mathscr{A}, (\cdot,\cdot))$ denotes the FDHS (finite-dimensional Hilbert space, i.e., finite-dimensional inner product space) of $n \times n$ real symmetric matrices with the trace inner product. The notation $Y \sim N_n(X\beta, \sum_{i=1}^m \nu_iV_i)$ means that $Y$ is an $n \times 1$ random vector distributed according to a multivariate normal distribution with expectation $X\beta$ and covariance matrix $\sum_{i=1}^m \nu_iV_i$; and for such a random vector the following is assumed:

(a) $X$ is a known $n \times p$ matrix and $\beta$ is an unknown vector of parameters ranging over $\Omega_1 = R^p$.

(b) Each $V_i$ $(i = 1, 2, \cdots, m)$ is a known $n \times n$ real symmetric matrix, $V_m = I$, and $\nu = (\nu_1, \cdots, \nu_m)'$ is a vector of unknown parameters ranging over a subset $\Omega_2$ of $R^m$.

(c) The set $\Omega_2$ contains a non-void open set in $R^m$, and $\sum_{i=1}^m \nu_iV_i$ is a positive definite matrix for each $\nu \in \Omega_2$.

(d) The parameters $\nu$ and $\beta$ are functionally independent, so that the entire parameter space is $\Omega = \Omega_1 \times \Omega_2$.

For the special case $X = 0$ the notation $Y \sim N_n(0, \sum_{i=1}^m \nu_iV_i)$ is used, and for this situation the parameter space $\Omega$ reduces to $\Omega_2$. The notation and terminology in the following sections are generally consistent with the usage in [12]. The adjoint of a linear operator $\mathbf{T}$ is denoted by $\mathbf{T}^\ast$ and the transpose of a matrix $A$ by $A'$. Additionally, the unique Moore-Penrose generalized inverse of a matrix $A$ is denoted by $A^+$, and, as in [12], only real finite-dimensional linear spaces are considered.
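The model and the quadratic-subspace idea can be made concrete with a small numerical sketch. The example below is not from the paper: it takes a hypothetical one-way random-effects layout ($n = 6$ observations in two groups of three), so that the covariance matrix is $\nu_1 V_1 + \nu_2 I$ with $V_1 = ZZ'$ for a group-membership matrix $Z$, and checks numerically that $\mathrm{span}\{V_1, I\}$ is closed under squaring, i.e., that it is a quadratic subspace in the sense investigated in Section 2. The helper `in_span` is an illustrative device, not notation from the paper.

```python
import numpy as np

# Hypothetical one-way random-effects model: Y ~ N_6(X*beta, v1*V1 + v2*I),
# two groups of three observations each.
Z = np.kron(np.eye(2), np.ones((3, 1)))  # 6 x 2 group-membership matrix
V1 = Z @ Z.T                             # block-diagonal 3x3 blocks of ones
V2 = np.eye(6)                           # V_m = I, as assumed in (b)

def in_span(A, basis, tol=1e-10):
    """Check whether the symmetric matrix A lies in span(basis),
    by solving a least-squares problem over vectorized matrices."""
    B = np.column_stack([M.ravel() for M in basis])
    coef, *_ = np.linalg.lstsq(B, A.ravel(), rcond=None)
    return np.linalg.norm(B @ coef - A.ravel()) < tol

# span{V1, V2} is closed under squaring: V1 @ V1 = 3*V1 here, and
# (a*V1 + b*V2)^2 = (3a^2 + 2ab)*V1 + b^2*V2 stays in the span,
# so it forms a quadratic subspace.
a, b = 0.7, 1.3
S = a * V1 + b * V2
print(in_span(S @ S, [V1, V2]))            # membership check
print(np.all(np.linalg.eigvalsh(S) > 0))   # S is positive definite, cf. (c)
```

For this choice of $V_1$ the closure is easy to verify by hand, since $V_1^2 = 3V_1$; the numerical span check simply mirrors that computation.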

#### Article information

Source
Ann. Math. Statist., Volume 42, Number 2 (1971), 710-721.

Dates
First available in Project Euclid: 27 April 2007

https://projecteuclid.org/euclid.aoms/1177693420

Digital Object Identifier
doi:10.1214/aoms/1177693420

Mathematical Reviews number (MathSciNet)
MR292215

Zentralblatt MATH identifier
0249.62067
