- Bayesian Anal.
- Volume 11, Number 1 (2016), 247-263.
Recursive Learning for Sparse Markov Models
Higher-order Markov chains are popular models for a wide variety of applications in natural language and DNA sequence processing. However, since the number of parameters grows exponentially with the order of a Markov chain, several alternative model classes have been proposed that allow for greater stability and higher rates of data compression. The notion common to these models is that they cluster the possible sample paths used to predict the next state into invariance classes, with an identical conditional distribution assigned to all members of a class. The models vary in particular with respect to the constraints imposed on legitimate partitions of the sample paths. Here we consider the class of sparse Markov chains, for which the partition is left unconstrained a priori. A recursive computation scheme based on Delaunay triangulation of the parameter space is introduced to enable fast approximation of the posterior mode partition. Comparisons with stochastic optimization, k-means, and nearest neighbor algorithms show that our approach is both considerably faster and leads on average to a more accurate estimate of the underlying partition. We show additionally that the criterion used in the recursive steps for comparison of triangulation cell contents leads to consistent estimation of the local structure in the sparse Markov model.
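The core idea of a sparse Markov chain, as described in the abstract, can be illustrated with a minimal sketch: length-m contexts are grouped into invariance classes, and every context in a class shares one conditional next-state distribution. The alphabet, partition, and probabilities below are hypothetical choices for illustration only, not the model or algorithm from the paper:

```python
import random

# Toy sparse second-order Markov chain over a DNA alphabet.
# All contexts in a class share a single conditional distribution,
# so the parameter count depends on the number of classes, not on
# the number of contexts (which grows exponentially with the order).

ALPHABET = "ACGT"

# Hypothetical partition of the 16 length-2 contexts into 3 classes.
PARTITION = {
    0: ["AA", "AC", "CA", "CC"],
    1: ["AG", "AT", "CG", "CT", "GA", "GC"],
    2: ["GG", "GT", "TA", "TC", "TG", "TT"],
}

# One conditional distribution per class (probabilities over A, C, G, T).
CLASS_DIST = {
    0: [0.4, 0.3, 0.2, 0.1],
    1: [0.1, 0.4, 0.4, 0.1],
    2: [0.25, 0.25, 0.25, 0.25],
}

# Map each context to its class for constant-time lookup.
CONTEXT_CLASS = {ctx: c for c, ctxs in PARTITION.items() for ctx in ctxs}


def simulate(length, seed=0):
    """Simulate a sequence of the given length from the sparse chain."""
    rng = random.Random(seed)
    seq = "AA"  # arbitrary initial context
    while len(seq) < length:
        dist = CLASS_DIST[CONTEXT_CLASS[seq[-2:]]]
        seq += rng.choices(ALPHABET, weights=dist)[0]
    return seq
```

A full second-order chain over this alphabet has 16 × 3 = 48 free parameters, whereas this sparse model has only 3 × 3 = 9; learning the model then amounts to estimating the partition together with the class distributions, which is the task the paper's recursive scheme addresses.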
First available in Project Euclid: 15 April 2015
Xiong, Jie; Jääskinen, Väinö; Corander, Jukka. Recursive Learning for Sparse Markov Models. Bayesian Anal. 11 (2016), no. 1, 247--263. doi:10.1214/15-BA949. https://projecteuclid.org/euclid.ba/1429105670
- Appendix of article by Xiong, Jääskinen, and Corander.