In this paper, we show that the conditional frequentist method of testing a precise hypothesis can be made virtually equivalent to Bayesian testing. The conditioning strategy proposed by Berger, Brown and Wolpert in 1994, for the simple versus simple case, is generalized to testing a precise null hypothesis versus a composite alternative hypothesis. Using this strategy, both the conditional frequentist and the Bayesian will report the same error probabilities upon rejecting or accepting. This is of considerable interest because it is often perceived that Bayesian and frequentist testing are incompatible in this situation. That they are compatible, when conditional frequentist testing is allowed, is a strong indication that the "wrong" frequentist tests are currently being used for postexperimental assessment of accuracy. The new unified testing procedure is discussed and illustrated in several common testing situations.
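The Bayesian side of a test of a precise null against a composite alternative can be sketched with a toy Bayes factor calculation. This is only an illustrative setup, not the conditional testing procedure of the paper: it assumes a normal model with known variance and, under the alternative, a conjugate normal prior on the mean.

```python
import math

def bayes_factor_point_null(xbar, n, sigma=1.0, tau=1.0):
    """Bayes factor B01 for the precise null H0: theta = 0 versus the
    composite alternative H1: theta ~ N(0, tau^2), given a sample mean
    xbar ~ N(theta, sigma^2 / n). Illustrative only; sigma and tau are
    assumed known."""
    v0 = sigma**2 / n      # sampling variance of xbar under H0
    v1 = v0 + tau**2       # marginal variance of xbar under H1
    def norm_pdf(x, var):
        return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
    return norm_pdf(xbar, v0) / norm_pdf(xbar, v1)

b01 = bayes_factor_point_null(xbar=0.5, n=20)
post_h0 = b01 / (1.0 + b01)  # posterior P(H0) under equal prior odds
```

Under equal prior odds, the posterior probability of the null is B01/(1 + B01); it is this kind of data-dependent error report that the unified conditional frequentist test is shown to reproduce.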
When R. A. Fisher studied statistics as a student at Cambridge, the typical way to think about statistical inference was in terms of the method of inverse probability and Bayes's theorem. While others groped for alternatives with systematic structure and desirable properties, it remained for Fisher to invent the notion of likelihood and to explore its properties. These two papers trace the emergence of Fisher's thinking on likelihood over a 10-year period.
In 1922 R. A. Fisher introduced the method of maximum likelihood. He first presented the numerical procedure in 1912. This paper considers Fisher's changing justifications for the method, the concepts he developed around it (including likelihood, sufficiency, efficiency and information) and the approaches he discarded (including inverse probability).
R. A. Fisher introduced the method of maximum likelihood in 1912, although it did not receive that name until 1922. This paper seeks to elucidate what Fisher understood by the phrase "inverse probability," which he used in various ways before defining "likelihood" in 1921 to clarify his meaning.
Some analyses of longitudinal blood pressure data have focused on the question of whether a current value of blood pressure is predictive of subsequent rate of change. A positive correlation between blood pressure values at the beginning of a longitudinal study and rate of change over the course of the study has been found in studies of adults. Negative correlation, however, has been found in a study of children. These studies, either implicitly or explicitly, rely on linear growth curve models in which subjects' blood pressure observations are assumed to follow simple linear regression models with slopes and intercepts varying among subjects, but with the slopes constant over time.
Our analysis of a longitudinal data set of 2,203 measurements of systolic blood pressure from 216 children also provided a negative estimate of the correlation. However, smoothed plots of cross products of residuals suggested that an alternative random effects model, in which rate of change of systolic blood pressure is not treated as constant over time, might better fit the data. It is possible that the negative estimates of the correlation found in children's blood pressure data are an artifact of assuming a constant rate of change when the data actually follow the alternative model. It is shown that the expected result of fitting the linear growth curve model to data that follow the alternative model is an apparent negative correlation between slope and intercept. In the data, the observed estimates of the parameters of the linear growth curve model are consistent with the observed estimates of the parameters of the alternative model.
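The linear growth curve model described above can be sketched with a small simulation: each subject has its own intercept and slope, drawn with an assumed negative correlation, and per-subject least-squares fits recover noisy versions of them. All numbers here (sample sizes, variances, the correlation of -0.4) are made up for illustration and are not the paper's blood pressure data or its alternative model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear growth curve model (illustrative): y_ij = a_i + b_i * t_j + e_ij,
# with subject-specific intercepts a_i and slopes b_i.
n_subjects = 200
times = np.arange(0.0, 5.0)           # measurement occasions
corr_true = -0.4                      # assumed intercept-slope correlation
sd_a, sd_b = 1.0, 0.2
cov = np.array([[sd_a**2,               corr_true * sd_a * sd_b],
                [corr_true * sd_a * sd_b, sd_b**2]])
ab = rng.multivariate_normal([100.0, 1.0], cov, size=n_subjects)

X = np.column_stack([np.ones_like(times), times])
est = np.empty_like(ab)
for i, (a, b) in enumerate(ab):
    y = a + b * times + rng.normal(0.0, 0.5, size=times.size)
    est[i], *_ = np.linalg.lstsq(X, y, rcond=None)  # per-subject OLS fit

# Naive correlation of estimated intercepts and slopes. Estimation error
# distorts it relative to corr_true, which is one reason random effects
# models, rather than raw per-subject fits, are used for such data.
corr_hat = np.corrcoef(est[:, 0], est[:, 1])[0, 1]
```

The point of the abstract is subtler than this sketch: even a genuine negative estimated correlation can be an artifact of forcing constant slopes on data whose rate of change varies over time.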
This article reviews key contributions in the area of statistics as applied to the use of molecular marker technology and quantitative genetics in the search for genes affecting quantitative traits responsible for specific diseases and economically important agronomic traits. Since an exhaustive literature review is not possible, we limit our scope to encouraging further statistical work in this vast field by first reviewing the human and domestic species literature, and then concentrating on the statistical developments for experimental breeding populations. Substantial gains have been made over the years by both plant and animal breeders toward a long-term goal of locating genes affecting quantitative traits (quantitative trait loci, QTLs) for the eventual characterization and manipulation of these genes in order to develop improved agronomically important traits. Our main concern is that the care and expense required in generating both genetic marker data and quantitative trait data should be accompanied by equal care in the statistical analysis of the data. Through an example using an $F_2$ male genetic map of mouse chromosome 10, and quantitative trait values measured on weight gain, we implement much of the reviewed methodology for the purpose of detecting or locating a QTL having an effect on weight gain.
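A minimal version of the kind of analysis reviewed above is a single-marker test in an F2 population: regress the trait on marker genotype and examine the slope's t-statistic. This sketch uses simulated data with an assumed additive effect, not the mouse chromosome 10 weight-gain data of the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative single-marker QTL scan for one marker in an F2 cross:
# genotypes are coded by the count of one parental allele (0, 1, 2),
# segregating in the expected 1:2:1 ratio.
n = 300
genotype = rng.choice([0, 1, 2], size=n, p=[0.25, 0.5, 0.25])
qtl_effect = 0.8                           # assumed additive QTL effect
trait = 10.0 + qtl_effect * genotype + rng.normal(0.0, 1.0, size=n)

# Regression of trait on genotype; the slope estimates the additive effect.
X = np.column_stack([np.ones(n), genotype])
beta, *_ = np.linalg.lstsq(X, trait, rcond=None)
resid = trait - X @ beta
s2 = resid @ resid / (n - 2)               # residual variance estimate
se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
t_stat = beta[1] / se                      # large |t| suggests a linked QTL
```

Real QTL mapping methods go well beyond this single-marker regression (interval mapping, multiple-QTL models, genome-wide significance thresholds), which is precisely the statistical care the review argues for.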