Open Access
March 2019
An algorithm for removing sensitive information: Application to race-independent recidivism prediction
James E. Johndrow, Kristian Lum
Ann. Appl. Stat. 13(1): 189–220 (March 2019). DOI: 10.1214/18-AOAS1201

Abstract

Predictive modeling is increasingly being employed to assist human decision-makers. One purported advantage of replacing or augmenting human judgment with computer models in high-stakes settings—such as sentencing, hiring, policing, college admissions, and parole decisions—is the perceived “neutrality” of computers. It is argued that because computer models do not hold personal prejudice, the predictions they produce will be equally free from prejudice. There is growing recognition, however, that employing algorithms does not remove the potential for bias and can even amplify it if the training data were generated by a process that is itself biased. In this paper, we provide a probabilistic notion of algorithmic bias. We propose a method to eliminate bias from predictive models by removing all information regarding protected variables from the data on which the models will ultimately be trained. Unlike previous work in this area, our procedure accommodates data on any measurement scale. Motivated by models currently in use in the criminal justice system that inform decisions on pre-trial release and parole, we apply our proposed method to a dataset on the criminal histories of individuals at the time of sentencing to produce “race-neutral” predictions of re-arrest. In the process, we demonstrate that a common approach to creating “race-neutral” models—omitting race as a covariate—still results in racially disparate predictions. We then demonstrate that applying our proposed method to these data removes racial disparities from predictions with minimal impact on predictive accuracy.
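The core idea of the procedure is to transform each covariate so that its distribution no longer depends on the protected variable, handling covariates sequentially, each conditional on the protected variable and the previously adjusted covariates; this is what allows arbitrary measurement scales. As a minimal sketch of that idea for a single continuous covariate and a discrete protected variable, the Python snippet below maps each observation to its within-group empirical quantile and then through the pooled marginal quantile function. The function name and the empirical-CDF estimator are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def remove_group_information(x, z):
    """Sketch: push each x_i through its within-group (z) empirical quantile,
    then through the pooled marginal quantile function, so the transformed
    covariate carries (approximately) no information about z."""
    x = np.asarray(x, dtype=float)
    z = np.asarray(z)
    x_tilde = np.empty_like(x)
    for g in np.unique(z):
        mask = z == g
        xg = x[mask]
        # Within-group empirical CDF values; rank / (n_g + 1) keeps u strictly in (0, 1).
        ranks = xg.argsort().argsort() + 1
        u = ranks / (mask.sum() + 1)
        # Evaluate the pooled marginal quantile function at the within-group quantiles.
        x_tilde[mask] = np.quantile(x, u)
    return x_tilde
```

On simulated data where x depends strongly on z, the group distributions of the transformed covariate should roughly coincide:

```python
rng = np.random.default_rng(0)
z = rng.integers(0, 2, size=1000)
x = rng.normal(loc=2.0 * z)  # x depends strongly on z
x_tilde = remove_group_information(x, z)
print(x[z == 0].mean(), x[z == 1].mean())              # far apart
print(x_tilde[z == 0].mean(), x_tilde[z == 1].mean())  # nearly equal
```

The full procedure in the paper goes further, chaining this kind of transformation across multiple covariates and accommodating discrete and mixed-scale variables; the sketch above covers only the simplest continuous case.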

Citation


James E. Johndrow, Kristian Lum. "An algorithm for removing sensitive information: Application to race-independent recidivism prediction." Ann. Appl. Stat. 13(1): 189–220, March 2019. https://doi.org/10.1214/18-AOAS1201

Information

Received: 1 March 2017; Revised: 1 May 2018; Published: March 2019
First available in Project Euclid: 10 April 2019

zbMATH: 07057425
MathSciNet: MR3937426
Digital Object Identifier: 10.1214/18-AOAS1201

Keywords: Algorithmic fairness, criminal justice, neutral predictions, racial bias, recidivism, risk assessment, selection bias

Rights: Copyright © 2019 Institute of Mathematical Statistics
