Abstract
For decades, National Football League (NFL) coaches’ observed fourth-down decisions have been largely inconsistent with prescriptions based on statistical models. In this paper we develop a framework to explain this discrepancy using an inverse optimization approach. We model the fourth-down decision and the subsequent sequence of plays in a game as a Markov decision process (MDP), the dynamics of which we estimate from NFL play-by-play data from the 2014 through 2022 seasons. We assume that coaches’ observed decisions are optimal but that the risk preferences governing their decisions are unknown. This yields an inverse decision problem for which the optimality criterion, or risk measure, of the MDP is the estimand. Using the quantile function to parameterize risk, we estimate the quantile-optimal policy under which the coaches’ observed decisions are minimally suboptimal. In general, we find that coaches’ fourth-down behavior is consistent with optimizing low quantiles of the next-state value distribution, which corresponds to conservative risk preferences. We also find that coaches exhibit higher risk tolerances when making decisions in the opponent’s half of the field than in their own half, and that league-average fourth-down risk tolerances have increased over time.
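To make the quantile parameterization of risk concrete, the following is a minimal illustrative sketch, not the authors’ code: for a single fourth-down state, each action induces a distribution over next-state values, and a τ-quantile-optimal decision maker selects the action whose distribution has the largest τ-quantile. The function name, data structures, and toy numbers below are hypothetical.

```python
import numpy as np

def quantile_optimal_action(next_values, next_probs, tau):
    """Pick the action maximizing the tau-quantile of its next-state
    value distribution. next_values[a] and next_probs[a] hold the
    support and transition probabilities under action a (hypothetical
    inputs, assumed estimated from play-by-play data)."""
    best_a, best_q = None, -np.inf
    for a, (v, p) in enumerate(zip(next_values, next_probs)):
        order = np.argsort(v)                 # sort the support
        cdf = np.cumsum(np.asarray(p)[order]) # empirical CDF over sorted values
        q = np.asarray(v)[order][np.searchsorted(cdf, tau)]  # tau-quantile
        if q > best_q:
            best_a, best_q = a, q
    return best_a

# Toy example: action 0 = "go for it" (high variance),
#              action 1 = "punt" (low variance).
values = [np.array([-3.0, 4.0]), np.array([0.5, 1.0])]
probs  = [np.array([0.5, 0.5]),  np.array([0.5, 0.5])]
for tau in (0.25, 0.75):
    print(tau, quantile_optimal_action(values, probs, tau))
```

A low quantile (τ = 0.25) selects the safe punt, while a high quantile (τ = 0.75) selects the risky conversion attempt, mirroring how low-quantile optimization corresponds to the conservative behavior the paper attributes to coaches.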
Acknowledgments
The authors would like to thank the Associate Editor and anonymous referees for their constructive comments that improved the quality of this paper.
Citation
Nathan Sandholtz, Lucas Wu, Martin Puterman, and Timothy C. Y. Chan. "Learning risk preferences in Markov decision processes: An application to the fourth down decision in the National Football League." Ann. Appl. Stat. 18(4), 3205–3228, December 2024. https://doi.org/10.1214/24-AOAS1933