Patrizia Berti, Emanuela Dreassi, Fabrizio Leisen, Luca Pratelli, Pietro Rigo
Statist. Sci., Advance Publication, 1–15 (2023). DOI: 10.1214/23-STS884
KEYWORDS: Bayesian inference, conditional identity in distribution, exchangeability, predictive distribution, sequential predictions, stationarity
Given a sequence $X = (X_1, X_2, \ldots)$ of random observations, a Bayesian forecaster aims to predict $X_{n+1}$ based on $(X_1, \ldots, X_n)$ for each $n \ge 0$. To this end, in principle, she only needs to select a collection $\sigma = (\sigma_0, \sigma_1, \ldots)$, called "strategy" in what follows, where $\sigma_0(\cdot) = P(X_1 \in \cdot)$ is the marginal distribution of $X_1$ and $\sigma_n(\cdot) = P(X_{n+1} \in \cdot \mid X_1, \ldots, X_n)$ is the $n$th predictive distribution. Because of the Ionescu–Tulcea theorem, $\sigma$ can be assigned directly, without passing through the usual prior/posterior scheme. One main advantage is that no prior probability needs to be selected. In a nutshell, this is the predictive approach to Bayesian learning. A concise review of the latter is provided in this paper. We try to put such an approach in the right framework, to clear up a few misunderstandings, and to provide a unifying view. Some recent results are discussed as well. In addition, some new strategies are introduced and the corresponding distribution of the data sequence $X$ is determined. The strategies concern generalized Pólya urns, random change points, covariates and stationary sequences.
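As a concrete illustration (not taken from the paper), the following minimal Python sketch assumes binary observations and a strategy of Pólya-urn type with hypothetical parameters $a$ and $b$: the predictive probability of observing a 1 after $n$ observations is $(a + \sum_{i \le n} X_i)/(a + b + n)$. Sampling $X_1, X_2, \ldots$ sequentially from these predictives is exactly the Ionescu–Tulcea construction of the law of $X$ from $\sigma$.

```python
import random

def polya_urn_strategy(a=1.0, b=1.0):
    """Return sigma_n for a Polya-urn-type strategy on {0, 1}:
    sigma_n({1}) = (a + # of ones among X_1..X_n) / (a + b + n).
    The parameters a, b and the binary state space are illustrative assumptions."""
    def sigma_n(past):
        n = len(past)
        return (a + sum(past)) / (a + b + n)  # P(X_{n+1} = 1 | X_1, ..., X_n)
    return sigma_n

def sample_sequence(sigma_n, length, rng=None):
    """Sample X_1, ..., X_length sequentially from the predictive distributions,
    i.e., the Ionescu-Tulcea construction of the data sequence."""
    rng = rng or random.Random(0)
    past = []
    for _ in range(length):
        p = sigma_n(past)
        past.append(1 if rng.random() < p else 0)
    return past

if __name__ == "__main__":
    sigma = polya_urn_strategy(a=1.0, b=1.0)
    print(sample_sequence(sigma, length=20))
```

Note that no prior is ever written down: the strategy alone (here, the urn-type predictive rule) determines the joint distribution of the sequence.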