Electronic Journal of Statistics

Extensive scoring rules

Matthew Parry

Abstract

Scoring rules evaluate the performance of probabilistic forecasts. A scoring rule is said to be local if it assigns a score based on the observed outcome and on outcomes that are in some sense “close” to the observed outcome. All scoring rules can be derived from a concave entropy functional, and the property of locality follows when the entropy is 1-homogeneous (up to an additive constant). Consequently, except for the log score, a local scoring rule has the remarkable property that it is 0-homogeneous; in other words, it assigns a score that is independent of the normalization of the quoted probability distribution. In many statistical applications, it is not plausible to treat observed outcomes as independent, e.g. time series data or multicomponent measurements. We show that local scoring rules can be easily extended to multidimensional outcome spaces. We also introduce the notion of an extensive scoring rule, i.e. a scoring rule that ensures the score of independent outcomes is a sum of independent scores. We construct local scoring rules that are extensive and show that a scoring rule is extensive if and only if it is derived from an extensive entropy.
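For orientation, a standard illustration (the notation here is ours, not drawn from the paper itself): the log score $S(x, q) = -\log q(x)$ is local and extensive, since for an independent quote $q(x, y) = q_1(x)\,q_2(y)$,

$$-\log q(x, y) = -\log q_1(x) - \log q_2(y),$$

but it is not 0-homogeneous, because $-\log\{\lambda q(x)\} = -\log q(x) - \log \lambda$. By contrast, the Hyvärinen score used in score matching,

$$S(x, q) = \Delta_x \log q(x) + \tfrac{1}{2}\,\bigl\|\nabla_x \log q(x)\bigr\|^2,$$

depends on $q$ only through derivatives of $\log q(x)$, so it is 0-homogeneous and can be evaluated from an unnormalized quote; for a product quote $q(x, y) = q_1(x)\,q_2(y)$ it likewise decomposes into the sum of the two component scores, illustrating extensivity.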

Article information

Source
Electron. J. Statist., Volume 10, Number 1 (2016), 1098-1108.

Dates
Received: February 2016
First available in Project Euclid: 14 April 2016

Permanent link to this document
https://projecteuclid.org/euclid.ejs/1460640637

Digital Object Identifier
doi:10.1214/16-EJS1132

Mathematical Reviews number (MathSciNet)
MR3486426

Zentralblatt MATH identifier
1381.62258

Subjects
Primary: 62C99: None of the above, but in this section
Secondary: 62A99: None of the above, but in this section

Keywords
Additivity, homogeneity, concavity, entropy, sequential score matching

Citation

Parry, Matthew. Extensive scoring rules. Electron. J. Statist. 10 (2016), no. 1, 1098--1108. doi:10.1214/16-EJS1132. https://projecteuclid.org/euclid.ejs/1460640637

