Open Access
Defining Replicability of Prediction Rules
Giovanni Parmigiani
Statist. Sci. 38(4): 543-556 (November 2023). DOI: 10.1214/23-STS891

Abstract

In this article, I propose an approach for defining replicability for prediction rules. Motivated by a recent report by the U.S. National Academies of Sciences, I start from the perspective that replicability means obtaining consistent results across studies suitable to address the same prediction question, each of which has obtained its own data. I then discuss concepts and issues in defining the key elements of this statement. I focus specifically on the meaning of "consistent results" in typical utilization contexts, and propose a multi-agent framework for defining replicability, in which agents are neither allied nor adversaries. I recover some of the prevalent practical approaches as special cases. I hope to provide guidance for a more systematic assessment of replicability in machine learning.

Funding Statement

Work supported by NSF Grant DMS-2113707.

Acknowledgments

I presented a preliminary version of Section 3 at a 2022 symposium on "Statistical methods and models for complex data," held in Padova. I am grateful to my discussants Marco Alfò and Gianmarco Altoè for very thoughtful comments, and to Aldo Solari for encouraging me to think about falsifiability in the context of replication. Mike Daniels, Lorenzo Trippa, Michael Lavine, and two insightful reviewers helped with comments on earlier drafts.

Citation

Giovanni Parmigiani. "Defining Replicability of Prediction Rules." Statist. Sci. 38(4): 543-556, November 2023. https://doi.org/10.1214/23-STS891

Information

Published: November 2023
First available in Project Euclid: 6 November 2023

Digital Object Identifier: 10.1214/23-STS891

Keywords: decision theory, prediction, replicability

Rights: Copyright © 2023 Institute of Mathematical Statistics
