The Annals of Applied Statistics

The role of the information set for forecasting—with applications to risk management

Hajo Holzmann and Matthias Eulert

Abstract

Predictions are issued on the basis of certain information. If the forecasting mechanisms are correctly specified, a larger amount of available information should lead to better forecasts. For point forecasts, we show how the effect of increasing the information set can be quantified by using strictly consistent scoring functions, where it results in smaller average scores. Further, we show that the classical Diebold–Mariano test, based on strictly consistent scoring functions and asymptotically ideal forecasts, is a consistent test for the effect of an increase in a sequence of information sets on $h$-step point forecasts. For the value at risk (VaR), we show that the average score, which corresponds to the average quantile risk, directly relates to the expected shortfall. Thus, increasing the information set will result in VaR forecasts which lead on average to smaller expected shortfalls. We illustrate our results in simulations and applications to stock returns for unconditional versus conditional risk management as well as univariate modeling of portfolio returns versus multivariate modeling of individual risk factors. The role of the information set for evaluating probabilistic forecasts by using strictly proper scoring rules is also discussed.

Article information

Source
Ann. Appl. Stat., Volume 8, Number 1 (2014), 595–621.

Dates
First available in Project Euclid: 8 April 2014

Permanent link to this document
https://projecteuclid.org/euclid.aoas/1396966300

Digital Object Identifier
doi:10.1214/13-AOAS709

Mathematical Reviews number (MathSciNet)
MR3192004

Zentralblatt MATH identifier
06302249

Keywords
Forecast; information set; scoring function; scoring rule; value at risk

Citation

Holzmann, Hajo; Eulert, Matthias. The role of the information set for forecasting—with applications to risk management. Ann. Appl. Stat. 8 (2014), no. 1, 595–621. doi:10.1214/13-AOAS709. https://projecteuclid.org/euclid.aoas/1396966300


References

  • Acerbi, C. and Tasche, D. (2002). On the coherence of expected shortfall. J. Banking Finance 26 1487–1503.
  • Bao, Y., Lee, T.-H. and Saltoğlu, B. (2006). Evaluating predictive performance of value-at-risk models in emerging markets: A reality check. J. Forecast. 25 101–128.
  • Berkowitz, J., Christoffersen, P. F. and Pelletier, D. (2011). Evaluating value-at-risk models with desk-level data. Management Science 57 2213–2227.
  • Bröcker, J. (2009). Reliability, sufficiency, and the decomposition of proper scores. Q. J. Roy. Meteor. Soc. 135 1512–1519.
  • Christoffersen, P. F. (1998). Evaluating interval forecasts. Internat. Econom. Rev. 39 841–862.
  • Christoffersen, P. F. (2009). Value-at-risk models. In Handbook of Financial Time Series (T. Mikosch, J. P. Kreiß, R. A. Davis and T. G. Andersen, eds.) 753–766. Springer, Berlin.
  • DeGroot, M. H. and Fienberg, S. E. (1983). The comparison and evaluation of forecasters. J. Roy. Stat. Soc. Ser. D (The Statistician) 32 12–22.
  • Diebold, F. X. (2012). Comparing predictive accuracy, twenty years later: A personal perspective on the use and abuse of Diebold–Mariano tests. Working Paper No. 18391, NBER.
  • Diebold, F. X. and Mariano, R. S. (1995). Comparing predictive accuracy. J. Bus. Econom. Statist. 13 253–263.
  • Durrett, R. (2005). Probability: Theory and Examples, 3rd ed. Thomson Brooks/Cole, Belmont, CA.
  • Engle, R. (2002). Dynamic conditional correlation: A simple class of multivariate generalized autoregressive conditional heteroskedasticity models. J. Bus. Econom. Statist. 20 339–350.
  • Escanciano, J. C. and Olmo, J. (2011). Robust backtesting tests for value-at-risk models. J. Financ. Economet. 9 132–161.
  • Giacomini, R. and White, H. (2006). Tests of conditional predictive ability. Econometrica 74 1545–1578.
  • Gneiting, T. (2011). Making and evaluating point forecasts. J. Amer. Statist. Assoc. 106 746–762.
  • Gneiting, T., Balabdaoui, F. and Raftery, A. E. (2007). Probabilistic forecasts, calibration and sharpness. J. R. Stat. Soc. Ser. B Stat. Methodol. 69 243–268.
  • Gneiting, T. and Raftery, A. E. (2007). Strictly proper scoring rules, prediction, and estimation. J. Amer. Statist. Assoc. 102 359–378.
  • Gneiting, T. and Ranjan, R. (2011). Comparing density forecasts using threshold- and quantile-weighted scoring rules. J. Bus. Econom. Statist. 29 411–422.
  • Heinrich, C. (2014). The mode functional is not elicitable. Biometrika. To appear.
  • Jorion, P. (2006). Value-at-Risk: The New Benchmark for Managing Financial Risk. McGraw-Hill, New York.
  • Klenke, A. (2008). Probability Theory: A Comprehensive Course. Springer, London.
  • McNeil, A. J., Frey, R. and Embrechts, P. (2005). Quantitative Risk Management: Concepts, Techniques and Tools. Princeton Univ. Press, Princeton, NJ.
  • Mitchell, J. and Wallis, K. F. (2011). Evaluating density forecasts: Forecast combinations, model mixtures, calibration and sharpness. J. Appl. Econometrics 26 1023–1040.
  • Newey, W. K. and West, K. D. (1987). A simple, positive semidefinite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica 55 703–708.
  • Patton, A. J. and Timmermann, A. (2012). Forecast rationality tests based on multi-horizon bounds. J. Bus. Econom. Statist. 30 1–17.
  • Rockafellar, R. T. and Uryasev, S. (2000). Optimization of conditional value-at-risk. J. Risk 2 21–41.
  • Tsyplakov, A. (2011). Evaluating density forecasts: A comment. Paper No. 31233, MPRA. Available at http://mpra.ub.uni-muenchen.de/31233.
  • van der Vaart, A. W. and Wellner, J. A. (1996). Weak Convergence and Empirical Processes: With Applications to Statistics. Springer, New York.