Multilinear models are models in which the expectation of a multiway array is the sum of products of parameters, where each parameter is associated with only one of the ways. In spectroscopy, multilinear models permit mathematical decompositions of data sets when chemical decomposition of specimens is difficult or impossible. This paper presents a unified description of the models in an array notation. The spectroscopic context shows how to interpret one initialization of the nonlinear least-squares fits of these models. Several examples show that these models can be applied successfully.
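For concreteness, a minimal illustration in generic notation (not taken from the paper): a trilinear model for a three-way array, such as fluorescence intensities indexed by emission wavelength i, excitation wavelength j and specimen k, takes the form

\[ \mathrm{E}(x_{ijk}) \;=\; \sum_{r=1}^{R} a_{ir}\, b_{jr}\, c_{kr}, \]

where each parameter a_{ir}, b_{jr}, c_{kr} is associated with exactly one of the three ways. The factors are then estimated by minimizing the nonlinear least-squares criterion \(\sum_{i,j,k} \bigl( x_{ijk} - \sum_{r} a_{ir} b_{jr} c_{kr} \bigr)^2\) over all factor matrices.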
The Federal Government of the United States has collected and published an increasing volume of statistics since the founding of the republic, but its contributions to statistical theory and method did not really begin until 1933. Before then, the bulk of Federal statistics was produced by tabulation and compilation, and methods were largely intuitive. The Roosevelt New Deal and the Committee on Government Statistics and Information Services (COGSIS) made probability sampling and statistical analysis a significant part of Government planning and operations. By the early years of World War II, Federal statisticians had become leaders rather than mere followers in statistical theory and methods. This article provides a summary of how this happened and especially of the subsequent development of survey sampling from finite populations. Attention is then turned to the development of statistical analysis in the Federal Government, a more diverse subject, which is both related to probability sampling in significant ways and very interesting because it is probably still at an early stage of development. The paper also comments on some recent developments in the Federal statistical system in general during the period 1977 to 1992.
The fiducial argument arose from Fisher's desire to create an inferential alternative to inverse methods. Fisher discovered such an alternative in 1930, when he realized that pivotal quantities permit the derivation of probability statements concerning an unknown parameter independent of any assumption concerning its a priori distribution. The original fiducial argument was virtually indistinguishable from the confidence approach of Neyman, although Fisher thought its application should be restricted in ways reflecting his view of inductive reasoning, thereby blending an inferential and a behaviorist viewpoint. After Fisher attempted to extend the fiducial argument to the multiparameter setting, this conflict surfaced, and he then abandoned the unconditional sampling approach of his earlier papers for the conditional approach of his later work. Initially unable to justify his intuition about the passage from a probability assertion about a statistic (conditional on a parameter) to a probability assertion about a parameter (conditional on a statistic), Fisher thought in 1956 that he had finally discovered the way out of this enigma with his concept of recognizable subset. But the crucial argument for the relevance of this concept was founded on yet another intuition--one which, now clearly stated, was later demonstrated to be false by Buehler and Feddersen in 1963.
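As a standard textbook illustration of the pivotal reasoning at issue (an assumed example, not a reconstruction of Fisher's own): for a sample of size n from a normal distribution with unknown mean \mu and known variance \sigma^2, the quantity

\[ Z \;=\; \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \]

has the N(0,1) distribution whatever the value of \mu, so the sampling statement \(P(-1.96 \le Z \le 1.96) = 0.95\) can be inverted into the assertion that \mu lies in \(\bar{X} \pm 1.96\,\sigma/\sqrt{n}\) with probability 0.95, without any prior distribution on \mu. In such one-parameter cases the fiducial and confidence readings coincide; the difficulties described above arose when Fisher pushed the argument beyond them.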