Open Access
Forecaster’s Dilemma: Extreme Events and Forecast Evaluation
Sebastian Lerch, Thordis L. Thorarinsdottir, Francesco Ravazzolo, Tilmann Gneiting
Statist. Sci. 32(1): 106-127 (February 2017). DOI: 10.1214/16-STS588

Abstract

In public discussions of the quality of forecasts, attention typically focuses on the predictive performance in cases of extreme events. However, the restriction of conventional forecast evaluation methods to subsets of extreme observations has unexpected and undesired effects, and is bound to discredit skillful forecasts when the signal-to-noise ratio in the data generating process is low. Conditioning on outcomes is incompatible with the theoretical assumptions of established forecast evaluation methods, thereby confronting forecasters with what we refer to as the forecaster’s dilemma. For probabilistic forecasts, proper weighted scoring rules have been proposed as decision-theoretically justifiable alternatives for forecast evaluation with an emphasis on extreme events. Using theoretical arguments, simulation experiments and a real data study on probabilistic forecasts of U.S. inflation and gross domestic product (GDP) growth, we illustrate and discuss the forecaster’s dilemma along with potential remedies.
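As a concrete illustration of the remedy the abstract mentions, a proper weighted scoring rule such as the threshold-weighted continuous ranked probability score (twCRPS) emphasizes extreme events without conditioning on outcomes: twCRPS(F, y) = ∫ w(z) (F(z) − 1{y ≤ z})² dz, with an indicator weight w(z) = 1{z ≥ r}. The sketch below is not the authors' code; the function name and numerical setup are our own, and the integral is approximated on a uniform grid using the empirical CDF of an ensemble forecast.

```python
import numpy as np

def tw_crps(samples, obs, threshold, n_grid=2001):
    """Estimate the threshold-weighted CRPS of an ensemble forecast.

    Uses the indicator weight w(z) = 1{z >= threshold} and approximates
    the integral of w(z) * (F(z) - 1{obs <= z})^2 on a uniform grid,
    where F is the empirical CDF of the ensemble members.
    """
    samples = np.sort(np.asarray(samples, dtype=float))
    lo = min(samples[0], obs) - 1.0
    hi = max(samples[-1], obs) + 1.0
    grid = np.linspace(lo, hi, n_grid)
    dz = grid[1] - grid[0]
    # Empirical CDF of the ensemble, evaluated on the grid.
    F = np.searchsorted(samples, grid, side="right") / samples.size
    ind = (obs <= grid).astype(float)     # step function 1{obs <= z}
    w = (grid >= threshold).astype(float) # indicator weight on the upper tail
    return float(np.sum(w * (F - ind) ** 2) * dz)

# Example: score a standard normal ensemble against the observation y = 0,
# once with (effectively) no weighting and once emphasizing the upper tail.
rng = np.random.default_rng(0)
ensemble = rng.normal(size=5000)
full_score = tw_crps(ensemble, 0.0, threshold=-100.0)  # ~ unweighted CRPS
tail_score = tw_crps(ensemble, 0.0, threshold=1.0)     # upper-tail emphasis
```

Because the weight only removes nonnegative contributions to the integral, the tail-weighted score can never exceed the unweighted one for the same forecast and observation; unlike restricting verification to extreme observations, this weighting keeps the score proper.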

Citation


Sebastian Lerch. Thordis L. Thorarinsdottir. Francesco Ravazzolo. Tilmann Gneiting. "Forecaster’s Dilemma: Extreme Events and Forecast Evaluation." Statist. Sci. 32 (1) 106 - 127, February 2017. https://doi.org/10.1214/16-STS588

Information

Published: February 2017
First available in Project Euclid: 6 April 2017

zbMATH: 06946266
MathSciNet: MR3634309
Digital Object Identifier: 10.1214/16-STS588

Keywords: Diebold–Mariano test, hindsight bias, likelihood ratio test, Neyman–Pearson lemma, predictive performance, probabilistic forecast, proper weighted scoring rule, rare and extreme events

Rights: Copyright © 2017 Institute of Mathematical Statistics
