Abstract
Using the concept of principal stratification from the causal inference literature, we introduce a new notion of fairness, called principal fairness, for human and algorithmic decision-making. Principal fairness states that one should not discriminate among individuals who would be similarly affected by the decision. Unlike the existing statistical definitions of fairness, principal fairness explicitly accounts for the fact that individuals can be impacted by the decision. This causal fairness formulation also enables online or post-hoc fairness evaluation and policy learning. We also explain how principal fairness relates to the existing causality-based fairness criteria. In contrast to the counterfactual fairness criteria, for example, principal fairness considers the effects of the decision in question rather than those of the protected attributes of interest. Finally, we discuss how to conduct empirical evaluation and policy learning under the proposed principal fairness criterion.
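The core criterion can be illustrated with a small simulation. In the sketch below (not the authors' implementation; all variable names and the synthetic data-generating process are assumptions for illustration), the pair of potential outcomes (Y(1), Y(0)) defines an individual's principal stratum, and a decision rule satisfies principal fairness when, within each stratum, the decision D is independent of the protected attribute A:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical synthetic data: protected attribute A and binary potential
# outcomes Y(1), Y(0) under decision D = 1 and D = 0, respectively.
A = rng.integers(0, 2, n)
Y1 = rng.integers(0, 2, n)
Y0 = rng.integers(0, 2, n)
stratum = 2 * Y1 + Y0  # four principal strata: (Y(1), Y(0)) in {0,1}^2

# A decision rule that depends only on the principal stratum satisfies
# principal fairness by construction: D is independent of A given the stratum.
p_by_stratum = np.array([0.1, 0.3, 0.6, 0.9])
D = rng.random(n) < p_by_stratum[stratum]

# Empirical check: within each stratum, P(D = 1 | A = a) should be
# (approximately) equal across the two levels of A.
for r in range(4):
    rates = [D[(stratum == r) & (A == a)].mean() for a in (0, 1)]
    print(f"stratum {r}: P(D=1 | A=0) = {rates[0]:.3f}, "
          f"P(D=1 | A=1) = {rates[1]:.3f}")
```

In practice the principal strata are not observed (at most one potential outcome is realized per individual), which is why the paper's empirical evaluation and policy learning methods matter; this simulation only makes the definition itself concrete.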
Funding Statement
We acknowledge the partial support by the National Science Foundation (SES–2051196) and Cisco Systems, Inc. (CG# 2370386).
Acknowledgments
We thank Elias Bareinboim, Hao Chen, Shizhe Chen, Christina Davis, Cynthia Dwork, Peng Ding, Robin Gong, Jim Greiner, Sharad Goel, Nathan Kallus, Gary King, Jamie Robins and Pragya Sur for comments and discussions. We also thank anonymous reviewers of the Alexander and Diviya Magaro Peer Pre-Review Program at IQSS for valuable feedback.
Citation
Kosuke Imai, Zhichao Jiang. "Principal Fairness for Human and Algorithmic Decision-Making." Statist. Sci. 38 (2), 317–328, May 2023. https://doi.org/10.1214/22-STS872