Open Access
Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model
Benjamin Letham, Cynthia Rudin, Tyler H. McCormick, David Madigan
Ann. Appl. Stat. 9(3): 1350-1371 (September 2015). DOI: 10.1214/15-AOAS848


We aim to produce predictive models that are not only accurate, but are also interpretable to human experts. Our models are decision lists, which consist of a series of if-then statements (e.g., if high blood pressure, then stroke) that discretize a high-dimensional, multivariate feature space into a series of simple, readily interpretable decision statements. We introduce a generative model called Bayesian Rule Lists that yields a posterior distribution over possible decision lists. It employs a novel prior structure to encourage sparsity. Our experiments show that Bayesian Rule Lists has predictive accuracy on par with the current top algorithms for prediction in machine learning. Our method is motivated by recent developments in personalized medicine, and can be used to produce highly accurate and interpretable medical scoring systems. We demonstrate this by producing an alternative to the CHADS$_{2}$ score, actively used in clinical practice for estimating the risk of stroke in patients who have atrial fibrillation. Our model is as interpretable as CHADS$_{2}$, but more accurate.
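To make the decision-list structure concrete, here is a minimal sketch of how such a classifier evaluates: conditions are checked in order and the first one that fires determines the prediction, with a default rule at the end. The rules and feature names below are hypothetical illustrations, not the fitted Bayesian Rule Lists model from the paper.

```python
def decision_list(patient):
    """Return a risk label by checking if-then rules in order."""
    # Ordered rules: (condition, label). The first matching rule wins.
    rules = [
        (lambda p: p["prior_stroke"], "high risk"),
        (lambda p: p["hypertension"] and p["age"] >= 75, "high risk"),
        (lambda p: p["age"] >= 65, "medium risk"),
    ]
    for condition, label in rules:
        if condition(patient):
            return label
    return "low risk"  # default rule when no condition fires

print(decision_list({"prior_stroke": False, "hypertension": True, "age": 80}))
```

Each prediction is justified by exactly one rule, which is what makes the model directly readable by a clinician. Bayesian Rule Lists places a posterior distribution over such lists (which rules appear, and in what order) rather than hand-specifying them as done here.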



Received: 1 October 2013; Revised: 1 April 2015; Published: September 2015
First available in Project Euclid: 2 November 2015

zbMATH: 06525989
MathSciNet: MR3418726
Digital Object Identifier: 10.1214/15-AOAS848

Keywords: Bayesian analysis, classification, interpretability

Rights: Copyright © 2015 Institute of Mathematical Statistics
