Electronic Journal of Statistics
- Electron. J. Statist.
- Volume 11, Number 1 (2017), 608-639.
Cross-calibration of probabilistic forecasts
When providing probabilistic forecasts for uncertain future events, it is common to strive for calibrated forecasts, that is, the predictive distribution should be compatible with the observed outcomes. Often, there are several competing forecasters of different skill. We extend common notions of calibration where each forecaster is analyzed individually, to stronger notions of cross-calibration where each forecaster is analyzed with respect to the other forecasters. In particular, cross-calibration distinguishes forecasters with respect to increasing information sets. We provide diagnostic tools and statistical tests to assess cross-calibration. The methods are illustrated in simulation examples and applied to probabilistic forecasts for inflation rates by the Bank of England. Computer code and supplementary material (Strähl and Ziegel, 2017a,b) are available online.
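The paper's code is in R; as an illustrative analogue only, the basic idea of calibration can be sketched in Python via probability integral transform (PIT) values: for an ideal forecaster, the PIT of each outcome under the issued predictive distribution is uniform on [0, 1]. The simulation setup below (a normal signal plus normal noise) is a hypothetical example, not taken from the paper:

```python
# Illustrative sketch (not the paper's method): PIT values of an ideal
# forecaster are approximately Uniform(0, 1).
import math
import random

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(42)
n = 10_000
pits = []
for _ in range(n):
    mu = random.gauss(0.0, 1.0)   # latent signal known to the forecaster
    y = random.gauss(mu, 1.0)     # observed outcome
    # The ideal forecaster issues N(mu, 1); its PIT is F(y) = Phi(y - mu).
    pits.append(normal_cdf(y - mu))

# For a calibrated forecaster the PITs look uniform:
# mean close to 1/2 and variance close to 1/12.
mean = sum(pits) / n
var = sum((p - mean) ** 2 for p in pits) / n
print(f"PIT mean = {mean:.3f}, PIT variance = {var:.3f}")
```

A histogram of `pits` would look flat for this ideal forecaster; systematic U- or hump-shapes indicate miscalibration. Cross-calibration, the paper's stronger notion, additionally conditions each forecaster's assessment on the competing forecasters' predictions.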
Received: November 2016
First available in Project Euclid: 3 March 2017
Strähl, Christof; Ziegel, Johanna. Cross-calibration of probabilistic forecasts. Electron. J. Statist. 11 (2017), no. 1, 608--639. doi:10.1214/17-EJS1244. https://projecteuclid.org/euclid.ejs/1488531637
- Further Examples and the Score Regression Approach. We provide a short discussion of the cross-calibration test suggested by Feinberg and Stewart (2008) and give additional examples of diagnostic plots for cross-calibration. We generalize the test suggested by Held et al. (2010) to a test for cross-ideal forecasters. Finally, we discuss a natural approach for testing marginal cross-calibration, which, unfortunately, is useless in practice.
- Computer Code. The zip archive contains all R code used in the paper and the supplementary material.