Abstract
The asymptotic classification risk of nearest neighbor procedures is well understood for i.i.d. training sequences. In this article, we generalize these results to a class of dependent models that includes hidden Markov models. When the observed patterns have Lebesgue densities, the asymptotic risk has the same form as in the i.i.d. case. For discrete distributions, we show that the asymptotic risk depends on the rule used to break ties between equal distances.
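For context, the i.i.d. expression the abstract refers to is the classical two-class limit of Cover and Hart (1967). The notation below ($\eta$, $R^*$) is standard but not taken from this paper:

```latex
R_{\mathrm{NN}} \;=\; \mathbb{E}\!\left[\,2\,\eta(X)\bigl(1-\eta(X)\bigr)\right],
\qquad
R^{*} \;\le\; R_{\mathrm{NN}} \;\le\; 2\,R^{*}\bigl(1-R^{*}\bigr),
```

where $\eta(x) = P(Y = 1 \mid X = x)$ is the regression function and $R^*$ is the Bayes risk.

To illustrate why the tie-breaking rule matters for discrete distributions, the following is a minimal sketch of 1-nearest-neighbor classification on a training sequence drawn from a hidden Markov model. The HMM parameters, the three tie-breaking rules, and all names in the code are illustrative assumptions, not the paper's construction; on a discrete observation space many training points sit at exactly the minimal distance, so different rules can yield different empirical risks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative HMM: 2 hidden states (the class labels) and discrete
# observations on {0, 1, 2}. The matrices are arbitrary demo choices.
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])       # hidden-state transition probabilities
B = np.array([[0.6, 0.3, 0.1],
              [0.1, 0.3, 0.6]])  # emission probabilities per state

def sample_hmm(n):
    """Generate a dependent training sequence (X_i, Y_i) from the HMM."""
    y = np.zeros(n, dtype=int)
    x = np.zeros(n, dtype=int)
    y[0] = rng.integers(2)
    x[0] = rng.choice(3, p=B[y[0]])
    for i in range(1, n):
        y[i] = rng.choice(2, p=A[y[i - 1]])
        x[i] = rng.choice(3, p=B[y[i]])
    return x, y

def nn_classify(x_train, y_train, x0, tie_rule="first"):
    """1-NN prediction for query x0 under an explicit tie-breaking rule."""
    d = np.abs(x_train - x0)             # distances to the query point
    ties = np.flatnonzero(d == d.min())  # all points at minimal distance
    if tie_rule == "first":
        return y_train[ties[0]]                  # smallest index wins
    if tie_rule == "random":
        return y_train[rng.choice(ties)]         # uniform random tie-break
    if tie_rule == "vote":
        return int(y_train[ties].mean() >= 0.5)  # majority among tied points
    raise ValueError(tie_rule)

x_train, y_train = sample_hmm(2000)
x_test, y_test = sample_hmm(500)
for rule in ("first", "random", "vote"):
    preds = np.array([nn_classify(x_train, y_train, x0, rule) for x0 in x_test])
    print(rule, "empirical risk:", np.mean(preds != y_test))
```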
Citation
M. Holst, A. Irle. "Nearest neighbor classification with dependent training sequences." Ann. Statist. 29 (5) 1424-1442, October 2001. https://doi.org/10.1214/aos/1013203460