The Annals of Applied Probability

Estimating the state of a noisy continuous time Markov chain when dynamic sampling is feasible

David Assaf

Full-text: Open access

Abstract

A continuous time Markov chain is observed with Gaussian white noise added to it. To the well-known problem of continuously estimating the current state of the chain, we introduce the additional option of continuously varying the sampling rates, as long as some restriction (or cost) on the average sampling rate is satisfied. The optimal solution to this "dynamic sampling" problem is presented and analyzed in closed form for the two-state symmetric case. It is shown that the resulting dynamic sampling procedure has a much lower asymptotic average error rate compared to the one obtained when sampling at a constant rate. Alternatively, the dynamic sampling procedure can provide the same error rate using a much lower average sampling rate. The relative efficiency of the dynamic sampling procedure may in fact tend to infinity in some extreme cases.
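The setup described in the abstract can be illustrated with a small simulation (a sketch only, not the paper's construction). A symmetric two-state chain with states ±1 and transition rate `lam` is filtered from noisy observations, where spending sampling rate `r` over a short step of length `dt` is modelled as one Gaussian observation with variance `sigma**2 / (r * dt)`. The constant-rate policy always spends the budget `r_bar`, while the hypothetical dynamic policy `r(p) = 4 * r_bar * p * (1 - p)` (chosen here purely for illustration; the article derives the actual optimal policy in closed form) concentrates sampling effort where the posterior is uncertain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not taken from the paper)
lam = 1.0      # transition rate of the symmetric two-state chain
sigma = 1.0    # noise intensity
r_bar = 20.0   # average sampling-rate budget
dt = 1e-3      # time step of the discretization
T = 100.0
n = int(T / dt)

# Simulate the chain X_t in {+1, -1} (a random telegraph signal)
x = np.empty(n)
x[0] = 1.0
flips = rng.random(n - 1) < lam * dt
for k in range(1, n):
    x[k] = -x[k - 1] if flips[k - 1] else x[k - 1]

def run_filter(rate_policy):
    """Discrete-time approximation of the nonlinear (Wonham-type) filter.

    rate_policy maps the posterior p = P(X = +1) to a sampling rate r;
    sampling at rate r during one step contributes a single Gaussian
    observation of X with variance sigma**2 / (r * dt).
    Returns (fraction of time the MAP estimate is wrong, average rate used).
    """
    p, errors, rate_sum = 0.5, 0, 0.0
    for k in range(n):
        p += lam * (1.0 - 2.0 * p) * dt        # predictor: dp/dt = lam * (1 - 2p)
        r = rate_policy(p)
        rate_sum += r
        if r > 0.0:
            var = sigma**2 / (r * dt)
            y = x[k] + rng.normal(0.0, np.sqrt(var))
            # Bayes update with Gaussian likelihoods for X = +1 and X = -1
            l_plus = np.exp(-(y - 1.0) ** 2 / (2.0 * var))
            l_minus = np.exp(-(y + 1.0) ** 2 / (2.0 * var))
            p = p * l_plus / (p * l_plus + (1.0 - p) * l_minus)
        x_hat = 1.0 if p >= 0.5 else -1.0
        errors += x_hat != x[k]
    return errors / n, rate_sum / n

err_const, _ = run_filter(lambda p: r_bar)                           # constant rate
err_dyn, r_used = run_filter(lambda p: 4.0 * r_bar * p * (1.0 - p))  # dynamic rate
print(err_const, err_dyn, r_used)
```

In runs of this kind the dynamic policy's realized average rate `r_used` comes in well below the budget `r_bar` while keeping a useful error rate, loosely illustrating the trade-off the abstract describes; the exact optimal policy and its error asymptotics are what the article works out.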

Article information

Source
Ann. Appl. Probab., Volume 7, Number 3 (1997), 822-836.

Dates
First available in Project Euclid: 16 October 2002

Permanent link to this document
https://projecteuclid.org/euclid.aoap/1034801256

Digital Object Identifier
doi:10.1214/aoap/1034801256

Mathematical Reviews number (MathSciNet)
MR1459273

Zentralblatt MATH identifier
0890.62072

Subjects
Primary: 62M20: Prediction [See also 60G25]; filtering [See also 60G35, 93E10, 93E11] 93E20: Optimal stochastic control 60J27: Continuous-time Markov processes on discrete state spaces 60J60: Diffusion processes [See also 58J65]

Keywords
Filtering; dynamic sampling; Gaussian white noise; diffusion process; optimal control; average error rate

Citation

Assaf, David. Estimating the state of a noisy continuous time Markov chain when dynamic sampling is feasible. Ann. Appl. Probab. 7 (1997), no. 3, 822--836. doi:10.1214/aoap/1034801256. https://projecteuclid.org/euclid.aoap/1034801256


References

  • Assaf, D. (1988). A dynamic sampling approach for detecting a change in distribution. Ann. Statist. 16 236-253.
  • Assaf, D. and Sharlin-Bilitzky, A. (1994). Dynamic search for a moving target. J. Appl. Probab. 31 438-457.
  • Bather, J. A. (1976). A control chart model and a generalized stopping problem for Brownian motion. Math. Oper. Res. 1 209-224.
  • Fleming, W. H. and Rishel, R. W. (1975). Deterministic and Stochastic Optimal Control. Springer, New York.
  • Gihman, I. I. and Skorohod, A. V. (1979). Controlled Stochastic Processes. Springer, New York.
  • Karlin, S. and Taylor, H. M. (1981). A Second Course in Stochastic Processes. Academic Press, New York.
  • Khasminskii, R. Z. and Lazareva, B. V. (1992). On some filtration procedure for jump Markov process observed in white Gaussian noise. Ann. Statist. 20 2153-2160.
  • Liptser, R. S. and Shiryayev, A. N. (1977). Statistics of Random Processes. Springer, New York.
  • Ross, S. M. (1982). Introduction to Stochastic Dynamic Programming. Academic Press, New York.
  • Ross, S. M. (1983). Stochastic Processes. Wiley, New York.
  • Weber, R. R. (1986). Optimal search for a randomly moving object. J. Appl. Probab. 23 708-717.
  • Yao, Y.-C. (1985). Estimation of noisy telegraph processes: nonlinear filtering versus nonlinear smoothing. IEEE Trans. Inform. Theory IT-31 444-446.