Open Access
SIRUS: Stable and Interpretable RUle Set for classification
Clément Bénard, Gérard Biau, Sébastien Da Veiga, Erwan Scornet
Electron. J. Statist. 15(1): 427-505 (2021). DOI: 10.1214/20-EJS1792

Abstract

State-of-the-art learning algorithms, such as random forests or neural networks, are often qualified as “black boxes” because of the high number and complexity of operations involved in their prediction mechanism. This lack of interpretability is a strong limitation for applications involving critical decisions, typically the analysis of production processes in the manufacturing industry. In such critical contexts, models have to be interpretable, i.e., simple, stable, and predictive. To address this issue, we design SIRUS (Stable and Interpretable RUle Set), a new classification algorithm based on random forests, which takes the form of a short list of rules. While simple models are usually unstable with respect to data perturbation, SIRUS achieves a remarkable stability improvement over cutting-edge methods. Furthermore, SIRUS inherits a predictive accuracy close to that of random forests, combined with the simplicity of decision trees. These properties are assessed from both a theoretical and an empirical point of view, through extensive numerical experiments based on our R/C++ software implementation sirus, available from CRAN.
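
As a quick illustration of the software mentioned above, the following R snippet is a minimal sketch of applying the sirus package from CRAN to a toy binary classification task. It assumes the package's sirus.fit, sirus.print, and sirus.predict functions and the num.rule argument; the iris-based example is purely illustrative and not taken from the paper.

    library(sirus)

    # Toy binary task (illustrative): versicolor vs. virginica in iris.
    df <- iris[iris$Species != "setosa", ]
    X  <- df[, 1:4]                                # numeric predictors
    y  <- as.numeric(df$Species == "versicolor")   # 0/1 response

    # Fit SIRUS; num.rule (assumed argument) caps the number of
    # extracted rules, keeping the model a short, readable list.
    model <- sirus.fit(X, y, num.rule = 10)

    # Print the rule set, i.e., the interpretable model itself.
    sirus.print(model)

    # Predict class probabilities (here, on the training data).
    prob <- sirus.predict(model, X)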

Citation


Clément Bénard. Gérard Biau. Sébastien Da Veiga. Erwan Scornet. "SIRUS: Stable and Interpretable RUle Set for classification." Electron. J. Statist. 15 (1) 427 - 505, 2021. https://doi.org/10.1214/20-EJS1792

Information

Received: 1 September 2020; Published: 2021
First available in Project Euclid: 6 January 2021

Digital Object Identifier: 10.1214/20-EJS1792

Subjects:
Primary: 62G05, 62G35
Secondary: 62G20, 62H30

Keywords: classification, interpretability, random forests, rules, stability
