This paper gives a brief overview of nonparametric techniques that are useful for financial econometric problems. These problems include estimation and inference for the instantaneous return and volatility functions of time-homogeneous and time-dependent diffusion processes, as well as estimation of transition densities and state price densities. We first describe the problems and then outline the main techniques and results. Some useful probabilistic aspects of diffusion processes are also summarized to facilitate the presentation and applications.
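As a minimal illustration of the kind of estimator such an overview surveys, the sketch below applies a Nadaraya-Watson kernel smoother to squared increments of a simulated diffusion to estimate the squared volatility function at a point. The Ornstein-Uhlenbeck model, its parameters, the bandwidth, and the sampling grid are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-simulate an Ornstein-Uhlenbeck path dX = kappa*(theta - X)dt + sigma dW
kappa, theta, sigma = 1.0, 0.0, 0.5     # assumed parameters
n, dt = 100_000, 1 / 252                # assumed sampling grid
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = theta
for i in range(1, n):
    x[i] = x[i - 1] + kappa * (theta - x[i - 1]) * dt \
           + sigma * np.sqrt(dt) * eps[i]

def nw_sigma2(path, dt, x0, h):
    """Nadaraya-Watson estimate of the squared diffusion function at x0,
    built from locally averaged squared increments (one standard
    nonparametric estimator of this general kind)."""
    w = np.exp(-0.5 * ((path[:-1] - x0) / h) ** 2)   # Gaussian kernel weights
    return np.sum(w * np.diff(path) ** 2) / (dt * np.sum(w))

print(nw_sigma2(x, dt, x0=0.0, h=0.1))   # close to sigma**2 = 0.25
```

Because the drift contributes only at order dt**2 to the squared increments, the local average recovers sigma**2 at the evaluation point as the sample grows and the bandwidth shrinks.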
These comments concentrate on two issues arising from Fan’s overview. The first concerns the importance of finite sample estimation bias relative to the specification and discretization biases that are emphasized in Fan’s discussion. Past research and simulations given here both reveal that finite sample effects can be more important than the other two effects when judged from either statistical or economic viewpoints. Second, we draw attention to a very different nonparametric technique that is based on computing an empirical version of the quadratic variation process. This technique is not mentioned by Fan but has many advantages and has accordingly attracted much recent attention in financial econometrics and empirical applications.
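The empirical quadratic variation idea mentioned above can be sketched in a few lines: summing squared high-frequency log-price increments estimates the integrated variance. The volatility level and the 5-minute sampling grid below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = 0.2                  # assumed annualized volatility
n = 78 * 252                 # assumed 5-minute grid over one trading year
dt = 1.0 / n
# Log-price increments of a driftless diffusion with constant volatility
increments = sigma * np.sqrt(dt) * rng.standard_normal(n)

# Realized variance: the empirical quadratic variation of the log price,
# which converges to the integrated variance as the grid refines
realized_var = np.sum(increments ** 2)
print(realized_var)          # close to the integrated variance sigma**2 = 0.04
```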
The optimal hypothesis tests for the binomial distribution and some other discrete distributions are uniformly most powerful (UMP) one-tailed and UMP unbiased (UMPU) two-tailed randomized tests. Conventional confidence intervals are not dual to randomized tests and perform badly on discrete data at small and moderate sample sizes. We introduce a new confidence interval notion, called fuzzy confidence intervals, that is dual to and inherits the exactness and optimality of UMP and UMPU tests. We also introduce a new P-value notion, called fuzzy P-values or abstract randomized P-values, that also inherits the same exactness and optimality.
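A toy sketch of the randomized-test idea behind these constructions, using only the standard binomial setting (the specific numbers are illustrative assumptions): the one-sided randomized P-value is exactly Uniform(0,1) under the null, so rejecting when it falls at or below alpha gives a test of exact size alpha.

```python
import math, random

def binom_pmf(k, n, p):
    # Binomial probability mass function via the exact binomial coefficient
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def randomized_p_value(x, n, p0, u=None):
    """One-sided (upper-tailed) randomized P-value for H0: p = p0
    against H1: p > p0, for x successes in n Bernoulli trials.
    Under H0 it is exactly Uniform(0,1), so 'reject when P <= alpha'
    has exact size alpha -- the randomized test referred to above."""
    if u is None:
        u = random.random()          # the auxiliary randomization
    tail = sum(binom_pmf(k, n, p0) for k in range(x + 1, n + 1))
    return tail + u * binom_pmf(x, n, p0)

# Observing x = 8 successes in n = 10 trials under H0: p = 0.5
print(randomized_p_value(8, 10, 0.5, u=0.5))   # 33.5/1024 ~= 0.0327
```

Averaging out (or reporting the distribution of) the auxiliary uniform U, rather than drawing it, is what yields the fuzzy, non-randomized summaries.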
We discuss the implementation, development and performance of methods of stochastic computation in Gaussian graphical models. We view these methods from the perspective of high-dimensional model search, with a particular interest in the scalability with dimension of Markov chain Monte Carlo (MCMC) and other stochastic search methods. After reviewing the structure and context of undirected Gaussian graphical models and model uncertainty (covariance selection), we discuss prior specifications, including new priors over models, and then explore a number of examples using various methods of stochastic computation. Traditional MCMC methods are the point of departure for this experimentation; we then develop alternative stochastic search ideas and contrast this new approach with MCMC. Our examples range from low (12–20) to moderate (150) dimension, and combine simple synthetic examples with data analysis from gene expression studies. We conclude with comments about the need and potential for new computational methods in far higher dimensions, including constructive approaches to Gaussian graphical modeling and computation.
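A stripped-down sketch of MCMC over graph space of the general kind discussed above, reduced to its scaffolding: a Metropolis sampler that flips one edge at a time, here targeting only an independent Bernoulli edge-inclusion prior (the simplest of the priors over models mentioned). This is a toy, not the paper's algorithm; a real covariance-selection search would multiply the acceptance ratio by a marginal-likelihood ratio for the data. The dimension and prior probability are assumptions.

```python
import random

random.seed(0)
p, pi = 6, 0.25                          # assumed: 6 nodes, prior edge prob 0.25
edges = [(i, j) for i in range(p) for j in range(i + 1, p)]
graph = set()                            # current model: set of included edges
inclusion = {e: 0 for e in edges}        # running edge-inclusion counts

n_iter = 50_000
for t in range(n_iter):
    e = random.choice(edges)             # propose flipping one edge
    # Metropolis ratio under the Bernoulli(pi) edge-inclusion prior
    ratio = (1 - pi) / pi if e in graph else pi / (1 - pi)
    if random.random() < min(1.0, ratio):
        graph ^= {e}                     # accept: toggle the edge
    for g in graph:
        inclusion[g] += 1

# With no data term, the chain's edge frequency recovers the prior pi
freq = sum(inclusion.values()) / (n_iter * len(edges))
print(round(freq, 3))                    # close to pi = 0.25
```

The scalability concern raised in the abstract shows up even here: the number of candidate edges grows as p*(p-1)/2, so single-edge moves explore the model space ever more slowly as the dimension rises.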
In 1922 R. A. Fisher introduced the modern regression model, synthesizing the regression theory of Pearson and Yule and the least squares theory of Gauss. The innovation was based on Fisher’s realization that the distribution associated with the regression coefficient was unaffected by the distribution of X. Subsequently Fisher interpreted the fixed X assumption in terms of his notion of ancillarity. This paper considers these contributions against the background of the development of statistical theory in the early twentieth century.
John Anthony Hartigan was born on July 2, 1937 in Sydney, Australia. He attended the University of Sydney, earning a B.Sc. degree in mathematics in 1959 and an M.Sc. degree in mathematics the following year. In 1960 John moved to Princeton where he studied for his Ph.D. in statistics under the guidance of John Tukey and Frank Anscombe. He completed his Ph.D. in 1962, and worked as an Instructor at Princeton in 1962–1963, and as a visiting lecturer at the Cambridge Statistical Laboratory in 1963–1964. In 1964 he joined the faculty at Princeton. He moved to Yale as Associate Professor with tenure in 1969, became a Professor in 1972 and, in 1983, became Eugene Higgins Professor of Statistics at Yale—a position previously held by Jimmie Savage.

He served as Chairman of the Statistics Department at Yale from 1973 to 1975 and again from 1988 to 1994. John was instrumental in the establishment of the Social Sciences Statistical Laboratory at Yale and served as its Director from 1985 to 1989 and again from 1991 to 1993. He served as Chairman of the National Research Council Committee on the General Aptitude Test Battery from 1987 to 1990.

John’s research interests cover the foundations of probability and statistics, classification, clustering, Bayes methods and statistical computing. He has published over 80 journal papers and two books: Clustering Algorithms in 1975 and Bayes Theory in 1983. He is a Fellow of the American Statistical Association and of the Institute of Mathematical Statistics. He served as President of the Classification Society from 1978 to 1980 and as Editor of Statistical Science from 1984 to 1987. John married Pamela Harvey in 1960. They have three daughters and three grandchildren.