Bernoulli
Volume 12, Number 3 (2006), 515-533.

Entropy for semi-Markov processes with Borel state spaces: asymptotic equirepartition properties and invariance principles

Valérie Girardin and Nikolaos Limnios

Full-text: Open access

Abstract

The aim of this paper is to define the entropy rate of a semi-Markov process with a Borel state space by extending the strong asymptotic equirepartition property (also called the ergodic theorem of information theory or Shannon-McMillan-Breiman theorem) to this class of non-stationary processes. The mean asymptotic equirepartition property (also called the Shannon-McMillan theorem) is also proven to hold. The relative entropy rate between two semi-Markov processes is defined. All earlier results concerning entropy for semi-Markov processes, jump Markov processes and Markov chains thus appear as special cases. Two invariance principles are established for entropy, one for the central limit theorem and the other for the law of the iterated logarithm.
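For orientation, the strong asymptotic equirepartition property referred to above can be stated schematically as follows. This is the standard formulation for stationary ergodic processes; the paper's contribution is extending it to semi-Markov processes, which are not stationary in general.

```latex
% Shannon-McMillan-Breiman theorem (strong AEP), schematic statement.
% Let (X_n) be a process whose finite-dimensional distributions have
% densities p_n for (X_1, \dots, X_n), and let H denote the entropy rate.
% The strong AEP asserts almost-sure convergence:
\[
  -\frac{1}{n}\,\log p_n(X_1,\dots,X_n)
  \;\xrightarrow[n\to\infty]{\text{a.s.}}\; H .
\]
% The mean AEP (Shannon-McMillan theorem) asserts the same limit in L^1.
```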

Article information

Source
Bernoulli, Volume 12, Number 3 (2006), 515-533.

Dates
First available in Project Euclid: 28 June 2006

Permanent link to this document
https://projecteuclid.org/euclid.bj/1151525134

Digital Object Identifier
doi:10.3150/bj/1151525134

Mathematical Reviews number (MathSciNet)
MR2232730

Zentralblatt MATH identifier
1114.60070

Keywords
asymptotic equirepartition property; entropy rate; functional central limit theorem; functional law of the iterated logarithm; invariance principle; relative entropy; semi-Markov processes; Shannon-McMillan theorem; Shannon-McMillan-Breiman theorem

Citation

Girardin, Valérie; Limnios, Nikolaos. Entropy for semi-Markov processes with Borel state spaces: asymptotic equirepartition properties and invariance principles. Bernoulli 12 (2006), no. 3, 515--533. doi:10.3150/bj/1151525134. https://projecteuclid.org/euclid.bj/1151525134

References

  • [1] Bad Dumitrescu, M. (1988) Some informational properties of Markov pure-jump processes. Casopis Pestovani Mat., 113, 429-434.
  • [2] Barbu, V., Boussemart, M. and Limnios, N. (2004) Discrete time semi-Markov model for reliability and survival analysis. Comm. Statist. Theory Methods, 33, 2833-2868.
  • [3] Barron, A. (1985) The strong ergodic theorem for densities: generalized Shannon-McMillan-Breiman theorem. Ann. Probab., 13, 1292-1303.
  • [4] Billingsley, P. (1968) Convergence of Probability Measures. New York: Wiley.
  • [5] Breiman, L. (1958) The individual ergodic theorem of information theory. Ann. Math. Statist., 28, 809-811. Correction (1960): 31, 809-810.
  • [6] Csiszár, I. (1996) Maxent, mathematics, and information theory. In K.M. Hanson and R.N. Silver (eds), Maximum Entropy and Bayesian Methods, pp. 35-50. Dordrecht: Kluwer Academic.
  • [7] Dym, H. (1966) A note on limit theorems for the entropy of Markov chains. Ann. Math. Statist., 37, 522-524.
  • [8] Garrett, A. (2001) Maximum entropy from the laws of probability. In A. Mohammad-Djafari (ed.), Bayesian Inference and Maximum Entropy Methods in Science and Engineering, pp. 3-22. Melville, NY: American Institute of Physics.
  • [9] Girardin, V. (2004) Entropy maximization for Markov and semi-Markov processes. Methodol. Comput. Appl. Probab., 6, 109-127.
  • [10] Girardin, V. (2005) On the different extensions of the ergodic theorem of information theory. In R. Baeza-Yates, J. Glaz, H. Gzyl, J. Hüsler and J.L. Palacios (eds), Recent Advances in Applied Probability, pp. 163-179. New York: Springer-Verlag.
  • [11] Girardin, V. and Limnios, N. (2003) On the entropy of semi-Markov processes. J. Appl. Probab., 40, 1060-1068.
  • [12] Girardin, V. and Limnios, N. (2004) Entropy rate and maximum entropy methods for countable semi-Markov chains. Comm. Statist. Theory Methods, 33, 609-622.
  • [13] Grendar, M. and Grendar, M. (2001) What is the question MaxEnt answers? A probabilistic interpretation. In A. Mohammad-Djafari (ed.), Bayesian Inference and Maximum Entropy Methods in Science and Engineering, pp. 83-93. Melville, NY: American Institute of Physics.
  • [14] Grigorescu, S. and Oprisan, G. (1976) Limit theorems for J-X processes with a general state space. Z. Wahrscheinlichkeitstheorie Verw. Geb., 35, 65-73.
  • [15] Gut, A. (1988) Stopped Random Walks, Limit Theorems and Applications. New York: Springer-Verlag.
  • [16] Herkenrath, U., Iosifescu, M. and Rudolph, A. (2003) Letter to the editor. A note on invariance principles for iterated random functions. J. Appl. Probab., 40, 834-837.
  • [17] Heyde, C.C. and Scott, D.J. (1973) Invariance principles for the law of the iterated logarithm for martingales and processes with stationary increments. Ann. Probab., 1, 428-436.
  • [18] Johnson, O. (2004) Information Theory and the Central Limit Theorem. London: Imperial College Press.
  • [19] Kifer, Y. (1986) Ergodic Theory of Random Transformations. Boston: Birkhäuser.
  • [20] Krengel, U. (1967) Entropy of conservative transformations. Z. Wahrscheinlichkeitstheorie Verw. Geb., 7, 161-181.
  • [21] Kullback, S. and Leibler, R.A. (1951) On information and sufficiency. Ann. Math. Statist., 22, 79-86.
  • [22] Limnios, N. and Oprisan, G. (2001a) Semi-Markov Processes and Reliability. Boston: Birkhäuser.
  • [23] Limnios, N. and Oprisan, G. (2001b) The invariance principle of an additive functional of a semi-Markov process. Rev. Roumaine Math. Pures Appl., 44, 75-83.
  • [24] McMillan, B. (1953) The basic theorems of information theory. Ann. Math. Statist., 24, 196-219.
  • [25] Meyn, S.P. and Tweedie, R.L. (1996) Markov Chains and Stochastic Stability. London: Springer-Verlag.
  • [26] O'Neil, R. (1990) The Shannon information on a Markov chain approximately normally distributed. Colloq. Math., 60/61, 569-577.
  • [27] Orey, S. (1985) On the Shannon-Perez-Moy theorem. Contemp. Math., 41, 319-327.
  • [28] Perez, A. (1964) Extensions of Shannon-McMillan's limit theorem to more general stochastic processes. In Transactions of the Third Prague Conference on Information Theory, Statistical Decision Functions, Random Processes, pp. 545-574. Prague: Czechoslovak Academy of Sciences.
  • [29] Pinsker, M.S. (1964) Information and Information Stability of Random Variables and Processes. San Francisco: Holden-Day.
  • [30] Shannon, C. (1948) A mathematical theory of communication. Bell Syst. Tech. J., 27, 379-423, 623-656.
  • [31] Wen, L. and Weiguo, Y. (1996) An extension of Shannon-McMillan theorem and some limit properties for nonhomogeneous Markov chains. Stochastic Process. Appl., 61, 129-145.