SIRUS: Stable and Interpretable RUle Set for Classification

State-of-the-art learning algorithms, such as random forests or neural networks, are often qualified as “black-boxes” because of the high number and complexity of operations involved in their prediction mechanism. This lack of interpretability is a strong limitation for applications involving critical decisions, typically the analysis of production processes in the manufacturing industry. In such critical contexts, models have to be interpretable, i.e., simple, stable, and predictive. To address this issue, we design SIRUS (Stable and Interpretable RUle Set), a new classification algorithm based on random forests, which takes the form of a short list of rules. While simple models are usually unstable with respect to data perturbation, SIRUS achieves a remarkable stability improvement over cutting-edge methods. Furthermore, SIRUS inherits a predictive accuracy close to random forests, combined with the simplicity of decision trees. These properties are assessed both from a theoretical and empirical point of view, through extensive numerical experiments based on our R/C++ software implementation sirus available from CRAN.


Introduction
the resulting quality at the end of the line, and then to increase the process efficiency. Since the quality of the produced entities is often characterized by a pass or fail output, the problem is in fact a classification task, and state-of-the-art learning algorithms can successfully catch patterns of these complex and nonlinear physical phenomena. However, any decision impacting the production process has long-term and heavy consequences, and therefore cannot simply rely on blind stochastic modelling. As a matter of fact, a deep physical understanding of the forces in action is required, and this makes black-box algorithms inappropriate. In a word, models have to be interpretable, i.e., provide an understanding of the internal mechanisms that relate inputs to outputs, in order to guide the physical analysis. This is typically the case in the aeronautics industry, for example, where the manufacturing of engine parts involves sensitive casting and forging processes. Interpretable models allow us to gain knowledge on the behavior of such production processes, which can lead, for instance, to identifying or fine-tuning critical parameters, improving measurement and control, optimizing maintenance, or deepening the understanding of physical phenomena. In the following paragraphs, we deepen the discussion about the definition of interpretability to highlight the limitations of the most popular interpretable nonlinear models: decision trees and rule algorithms (Guidotti et al., 2018). Despite their high predictivity and simple structure, these methods are unstable, which is a strong operational limitation. The goal of this article is to introduce SIRUS (Stable and Interpretable RUle Set), an interpretable rule classification algorithm which considerably improves stability over state-of-the-art methods, while preserving their simple structure, accuracy, and computational complexity.
As stated in Rüping (2006), Lipton (2016), Doshi-Velez and Kim (2017), or Murdoch et al. (2019), to date, there is no agreement in the statistics and machine learning communities about a rigorous definition of interpretability. There are multiple concepts behind it, many different types of methods, and a strong dependence on the area of application and the audience. Here, we focus on models that are intrinsically interpretable, which directly provide insights on how inputs and outputs are related, as opposed to the post-processing of black-box models. In that case, we argue that it is possible to define minimum requirements for interpretability through the triptych "simplicity, stability, and predictivity", in line with the framework recently proposed by Yu and Kumbier (2019). Indeed, in order to grasp how inputs and outputs are related, the structure of the model has to be simple. The notion of simplicity is implied whenever interpretability is invoked (e.g., Rüping, 2006; Freitas, 2014; Letham, 2015; Letham et al., 2015; Lipton, 2016; Ribeiro et al., 2016; Murdoch et al., 2019) and essentially refers to the model size, complexity, or the number of operations performed in the prediction mechanism. Yu (2013) defines stability as another fundamental requirement for interpretability: conclusions of a statistical analysis have to be robust to small data perturbations to be meaningful. Indeed, a specific analysis is likely to be run multiple times, eventually adding a small new batch of data, and an interpretable algorithm should be insensitive to such modifications. Otherwise, unstable models provide us with a partial and arbitrary analysis of the underlying phenomena, and arouse distrust of the domain experts. Finally, if the predictive accuracy of an interpretable model is significantly lower than that of a state-of-the-art black-box algorithm, it clearly misses strong patterns in the data and will therefore be useless, as explained in Breiman (2001b).
For example, the trivial model that outputs the empirical mean of the observations for any input is simple and stable, but in most cases brings no useful information. Thus, we add good predictivity as an essential requirement for interpretability.
Decision trees are a class of supervised learning algorithms that recursively partition the input space and make local decisions in the cells of the resulting partition. Trees can model highly nonlinear patterns while having a simple structure, and are therefore good candidates when interpretability is required. However, trees are unstable to small data perturbations (Oates and Jensen, 1997; Guidotti and Ruggieri, 2019). More precisely, as explained in Breiman (2001b), by randomly removing only 2-3% of the training data, the tree structure can become quite different, which is a strong limitation to their practical use. Another class of supervised learning methods that can model nonlinear patterns while retaining a simple structure are the so-called rule models. As such, a rule is defined as a conjunction of constraints on input variables, which form a hyperrectangle in the input space where the estimated output is constant. A collection of rules is combined to form a model. Here, the term "rule" does not stand for "classification rule" but refers, as is traditional in the rule learning literature, to a piecewise constant estimate that simply reads "if conditions on x, then response, else default response". Despite their simplicity and excellent predictive skills, rule algorithms are unstable and, from this point of view, share the same limitation as decision trees (Letham et al., 2015; Murdoch et al., 2019).
In line with the above, we design SIRUS in the present paper, a new rule classification algorithm which inherits an accuracy close to random forests and the simplicity of decision trees, while having a stable structure. The core aggregation principle of random forests is kept, but instead of aggregating predictions, SIRUS focuses on the probability that a given hyperrectangle (i.e., a node) is contained in a randomized tree. The nodes with the highest probability are robust to data perturbation and represent strong patterns. They are therefore selected to form a stable rule ensemble model. Here, we provide a first illustration of SIRUS with a simple and real case: the Titanic dataset (Piech, 2016). The survival status of 887 passengers is recorded, as well as various personal characteristics: age, sex, class, number of siblings and parents aboard, and the paid fare. SIRUS outputs a simple set of 7 rules of the form "if condition then p_s else default p_s", which enables one to grasp at a glance the main patterns explaining passenger survival; the average survival rate is p_s = 39%. To generate the prediction for a new query point x, SIRUS checks for each rule whether the conditions are satisfied to assign one of the two possible p_s output values. Let us say for example that the sex of passenger x is female; then x satisfies the condition of the first rule, which returns p_s = 74%. Next, the 7 rule outputs are averaged to provide the predicted probability of survival for x. The model is stable: when a 10-fold cross-validation is run to simulate data perturbation, 5 to 6 rules are consistent across two folds on average. The model error (1 - AUC) is 0.17, close to the 0.13 of random forests, whereas simplicity is drastically increased: 7 rules versus about 10^4 operations for a forest prediction.
First, we review the main rule algorithms and present their mechanism principles in Section 2. Next, Section 3 is devoted to the detailed description of SIRUS. One of the main contributions of this work is the development of a software implementation, via the R/C++ package sirus (Benard and Wright, 2020) available from CRAN, based on ranger, a high-performance random forest implementation (Wright and Ziegler, 2017). In Section 4, we show that the good empirical behavior of SIRUS is theoretically understood by proving its asymptotic stability. Then, in Section 5, we illustrate the efficiency of our algorithm through numerical experiments on real datasets. Finally, Section 6 summarizes the main contributions of the article and provides directions for future research.

Related Work
As stated in the introduction, SIRUS has two types of competitors: decision trees and rule algorithms. More precisely, the latter can further be split into three different kinds: classical rule algorithms based on greedy heuristics, those built on top of frequent pattern mining algorithms, and those extracted from tree ensembles.
Decision trees may be the most popular competitors of SIRUS because of their simple structure. The main algorithms are CART (Breiman et al., 1984) and C5.0 (Quinlan, 1992). However, trees are unstable, as we have already highlighted. A widespread method to stabilize decision trees is bagging (Breiman, 1996), in which multiple trees are grown on perturbed data and aggregated together. Random forests is an algorithm developed by Breiman (2001a) that improves over bagging by randomizing the tree construction. Predictions are stable and accuracy is increased, but the final model is unfortunately a black box. Thus, the simplicity of trees is lost, and some post-treatment mechanisms are needed to understand how random forests make their decisions. Nonetheless, even if they are useful, such treatments only provide partial information and can be difficult to operationalize for critical decisions (Rudin, 2018). For example, variable importance (Breiman, 2001a, 2003a) identifies variables that have a strong impact on the output, but not which input values are associated to output values of interest. Similarly, local approximation methods such as LIME (Ribeiro et al., 2016) or Tolomei et al. (2017) do not provide insights on the global relation between inputs and outputs.
Rule learning originates from the influential AQ system of Michalski (1969). Many algorithms based on greedy heuristics were subsequently developed in the 1980s and 1990s, including Decision List (Rivest, 1987), CN2 (Clark and Niblett, 1989), FOIL (First-Order Inductive Learner, Quinlan, 1990; Quinlan and Cameron-Jones, 1995), IREP (Incremental Reduced Error Pruning, Fürnkranz and Widmer, 1994), RIPPER (Repeated Incremental Pruning to Produce Error Reduction, Cohen, 1995), PART (Partial Decision Trees, Frank and Witten, 1998), SLIPPER (Simple Learner with Iterative Pruning to Produce Error Reduction, Cohen and Singer, 1999), LRI (Lightweight Rule Induction, Weiss and Indurkhya, 2000), and ENDER (Ensemble of Decision Rules, Dembczyński et al., 2010). Since these methods are based on greedy heuristics, they are computationally fast, but, similarly to decision trees, they are unstable and their accuracy is often limited.
At the end of the 1990s, a new type of rule algorithm based on frequent pattern mining was introduced with CBA (Classification Based on Association Rules, Liu et al., 1998), then extended with CPAR (Classification based on Predictive Association Rules, Yin and Han, 2003). Frequent pattern mining was originally used to identify frequent occurrences in database mining. Since the output Y ∈ {0, 1} is discrete and the input data can be discretized, candidate rules for classification can be generated by identifying frequent patterns associated with each output label. This exhaustive search for association rules is computationally costly (exponential in the input dimension), and efficient heuristics are used, essentially Apriori (Agrawal et al., 1993) and Eclat (Zaki et al., 1997). The rule aggregation mechanism is specific to each algorithm. More recently, BRL (Bayesian Rule List, Letham et al., 2015) uses a more sophisticated Bayesian framework for the rule aggregation than the simple approach of CBA and CPAR, while IDS (Interpretable Decision Sets, Lakkaraju et al., 2016) uses a multi-objective optimization to select interpretable rules. Finally, CORELS (Certifiably Optimal RulE ListS, Angelino et al., 2017) generates optimal rule lists for categorical data. Interestingly, these methods exhibit quite good stability properties, as we will see, higher than decision trees, but on the other hand their predictive accuracy is worse.
The last decade has seen a resurgence of rule models through powerful algorithms based on rule extraction from tree ensembles, especially RuleFit (Friedman and Popescu, 2008) and Node harvest (Meinshausen, 2010). Notice that SIRUS is also based on this principle. More specifically, RuleFit extracts all the rules of a boosted tree ensemble (Friedman and Popescu, 2003), while Node harvest is based on random forests. Then, the extracted rules are linearly combined in a sparse linear model: a logistic regression with a Lasso penalty (Tibshirani, 1996) for RuleFit, and a constrained quadratic program for Node harvest. These two methods have a computational complexity comparable to random forests and SIRUS, since the main step of all these algorithms is to grow a tree ensemble with a large number of trees. However, both algorithms are unstable, and both output quite complex and long lists of rules. Even running RuleFit or Node harvest multiple times on the same dataset produces quite different rule lists because of the randomness in the tree ensembles; see Appendix A.1. On the other hand, SIRUS is built so that its structure converges for a given dataset, as explained later in Section 3.
To the best of our knowledge, the signed iterative random forest method (s-iRF, Kumbier et al., 2018) is the only procedure that tackles both rule learning and stability. Using random forests, s-iRF manages to extract stable signed interactions, i.e., feature interactions enriched with a thresholding behavior for each variable, lower or higher, but without specific threshold values. Therefore, s-iRF can be difficult to operationalize since it does not provide any specific input thresholds, and thus no precise information about the influence of input variables. An explicit rule model, on the other hand, identifies specific regions of interest in the input space.

SIRUS Algorithm
Within the general framework of supervised (binary) classification, we assume to be given an i.i.d. sample $\mathcal{D}_n = \{(X_i, Y_i), i = 1, \ldots, n\}$, where each $X_i = (X_i^{(1)}, \ldots, X_i^{(p)})$ is a random vector taking values in R^p and Y_i ∈ {0, 1} is a binary response. Throughout the document, the distribution of (X, Y) is assumed to be unknown and is denoted by P_{X,Y}. For x ∈ R^p, our goal is to accurately estimate the conditional probability η(x) = P(Y = 1 | X = x) with a few simple and stable rules.
To tackle this problem, SIRUS first builds a (slightly modified) random forest. Next, each hyperrectangle of each tree of the forest is turned into a simple decision rule, and the collection of these elementary rules is ranked based on their frequency of appearance in the forest. Finally, the most significant rules are retained and averaged together to form an ensemble model. We describe the four steps of the SIRUS algorithm in the following paragraphs: rule generation, rule selection, rule post-treatment, and rule aggregation. This section ends with a discussion of the stability of SIRUS.
Rule generation. SIRUS uses at its core the random forest method (Breiman, 2001a), slightly modified for our purpose. As in the original procedure, each single tree in the forest is grown with a greedy heuristic that recursively partitions the input space using a random variable Θ. The essential difference between our approach and Breiman's is that, prior to all tree constructions, the empirical q-quantiles of the marginal distributions over the whole dataset are computed: in each node of each tree, the best split can be selected among these empirical quantiles only. This constraint is critical to stabilize the forest structure and keeps the predictive accuracy almost intact, provided q is not too small (typically of the order of 10; see the experiments in Subsection 5.4). Apart from this difference, the tree growing is similar to Breiman's original procedure. The tree randomization Θ is independent of the sample and has two independent components, denoted by Θ^(S) and Θ^(V), which are respectively used for the subsampling mechanism and the randomization of the split direction. Throughout the manuscript, we let $\hat{q}^{(j)}_{n,r}$ be the empirical r-th q-quantile of $\{X_1^{(j)}, \ldots, X_n^{(j)}\}$, with typically q = 10. The construction of the individual trees is summarized in Algorithm 1 below.
Algorithm 1 Tree construction
1: Parameters: number of quantiles q, number of subsampled observations a_n, number of eligible directions for splitting mtry.
2: Compute the empirical q-quantiles of each marginal distribution over the whole dataset.
3: Subsample with replacement a_n observations, indexed by Θ^(S). Only these observations are used to build the tree.
4: Initialize the cell H as the root of the tree.
5: Draw uniformly at random a subset Θ^(V) ⊂ {1, ..., p} of cardinality mtry.
6: For all j ∈ Θ^(V), compute the CART-splitting criterion at all empirical q-quantiles of X^(j) that split the cell H into two non-empty cells.
7: Choose the split that maximizes the CART-splitting criterion.
8: Recursively repeat lines 5-7 for the two resulting children cells H_L and H_R.
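To make the quantile-restricted split search of lines 5-7 concrete, here is a minimal Python sketch. This is an illustration only, not the ranger/sirus implementation: the helper names and the simple left-continuous quantile convention are ours.

```python
import statistics

def empirical_quantiles(column, q=10):
    """Empirical q-quantile split candidates for one input variable
    (a simple left-continuous convention, chosen here for illustration)."""
    xs = sorted(column)
    n = len(xs)
    return [xs[min(n - 1, max(0, (r * n) // q - 1))] for r in range(1, q)]

def best_split(X, y, features, q=10):
    """Pick the (variable, cut) pair maximizing the decrease of variance,
    with cut values restricted to the empirical q-quantiles: this
    restriction is the key difference with Breiman's original forests."""
    def var(v):
        return statistics.pvariance(v) if len(v) > 1 else 0.0
    best, best_gain = None, -1.0
    parent = var(y)
    for j in features:
        for cut in empirical_quantiles([row[j] for row in X], q):
            left = [yi for row, yi in zip(X, y) if row[j] < cut]
            right = [yi for row, yi in zip(X, y) if row[j] >= cut]
            if not left or not right:
                continue  # only splits producing two non-empty cells
            gain = parent - (len(left) * var(left) + len(right) * var(right)) / len(y)
            if gain > best_gain:
                best_gain, best = gain, (j, cut)
    return best
```

In a full tree construction, this search would be repeated recursively on the two children cells, with a fresh random subset of mtry coordinates at each node.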
Figure 1: Example of the root node R^2 partitioned by a randomized tree of depth 2: the tree on the right side, the associated paths and hyperrectangles of length d = 2 on the left side.
The main step of SIRUS is to extract rules from the modified random forest. The cornerstone of this extraction mechanism is the notion of path in a decision tree. Indeed, a path describes the sequence of splits to go from the root of the tree to a specific (inner or terminal) node. Since a hyperrectangle is associated to each node, a rule can be defined as a piecewise constant estimate with this hyperrectangle as support. Therefore, to rigorously define the rule extraction, we introduce the symbolic representation of a path in a tree. We insist that such a definition is valid for both terminal leaves and inner nodes, which are all used by SIRUS. To begin, we follow the example shown in Figure 1 with a tree of depth 2 partitioning the input space R^2. For instance, let us consider the node P_6 defined by the sequence of two splits $X_i^{(2)} \geq \hat{q}^{(2)}_{n,4}$ and $X_i^{(1)} \geq \hat{q}^{(1)}_{n,7}$. The first split is symbolized by the triplet (2, 4, R), whose components respectively stand for the variable index 2, the quantile index 4, and the right side R of the split. Similarly, for the second split we cut coordinate 1 at quantile index 7, and pass to the right. Thus, the path to the considered node is defined by P_6 = {(2, 4, R), (1, 7, R)}. Also notice that the first split already defines the path P_2 = {(2, 4, R)}, associated to the right inner node at the first level of the tree. Of course, this generalizes to each path P of length d under the symbolic compact form $\mathcal{P} = \{(j_1, r_1, s_1), \ldots, (j_d, r_d, s_d)\}$, where, for k ∈ {1, ..., d}, the triplet (j_k, r_k, s_k) describes how to move from level (k - 1) to level k, with a split using the coordinate j_k ∈ {1, ..., p}, the index r_k ∈ {1, ..., q - 1} of the corresponding quantile, and a side s_k = L if we go to the left and s_k = R if we go to the right. The set of all possible such paths is denoted by Π. It is important to note that Π is in fact a deterministic (that is, nonrandom) quantity, which only depends upon the dimension p and the order q of the quantiles.
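The symbolic representation can be made concrete with a small sketch (illustrative Python, with our own minimal encoding of a fitted tree as nested dicts): one path per inner and terminal node. For simplicity, only the right child of the root is split, as in the P_6 example above.

```python
def extract_paths(node, prefix=()):
    """Collect the symbolic paths of all inner and terminal nodes.
    A node is either None (no further split) or a dict holding the split
    triplet pieces 'j' (variable index) and 'r' (quantile index), plus
    its 'left' and 'right' children."""
    paths = []
    if node is None:
        return paths
    for side, child in (("L", node.get("left")), ("R", node.get("right"))):
        path = prefix + ((node["j"], node["r"], side),)
        paths.append(path)                      # path to this child node
        paths.extend(extract_paths(child, path))  # descend recursively
    return paths

# Splits taken from the Figure 1 example: root split (2, 4),
# right child split (1, 7).
tree = {"j": 2, "r": 4, "left": None,
        "right": {"j": 1, "r": 7, "left": None, "right": None}}
```

Here `extract_paths(tree)` returns four paths, including P_2 = ((2, 4, "R"),) and P_6 = ((2, 4, "R"), (1, 7, "R")); applied to every tree of the forest, this produces the random subsets T(Θ, D_n) of Π.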
Of course, given a path P ∈ Π, one can recover the hyperrectangle (i.e., the tree node) $\hat{H}_n(\mathcal{P})$ associated with P and the entire dataset D_n via the correspondence $\hat{H}_n(\mathcal{P}) = \{x \in \mathbb{R}^p : x^{(j_k)} < \hat{q}^{(j_k)}_{n,r_k} \text{ if } s_k = L, \; x^{(j_k)} \geq \hat{q}^{(j_k)}_{n,r_k} \text{ if } s_k = R, \; k = 1, \ldots, d\}$. Finally, an elementary rule $\hat{g}_{n,\mathcal{P}}$ can be defined from $\hat{H}_n(\mathcal{P})$ as a piecewise constant estimate: $\hat{g}_{n,\mathcal{P}}(x)$ returns the empirical probability that the output Y is of class 1, conditional on whether the query point x belongs to $\hat{H}_n(\mathcal{P})$ or not. Thus, the rule $\hat{g}_{n,\mathcal{P}}$ associated to the path P ∈ Π is formally defined by $\hat{g}_{n,\mathcal{P}}(x) = \frac{1}{N_n(\hat{H}_n(\mathcal{P}))} \sum_{i : X_i \in \hat{H}_n(\mathcal{P})} Y_i$ if $x \in \hat{H}_n(\mathcal{P})$, and $\hat{g}_{n,\mathcal{P}}(x) = \frac{1}{n - N_n(\hat{H}_n(\mathcal{P}))} \sum_{i : X_i \notin \hat{H}_n(\mathcal{P})} Y_i$ otherwise, using the convention 0/0 = 0, and where $N_n(\hat{H}_n(\mathcal{P}))$ is the number of observations in the node associated with P. This formal definition can be illustrated with the Titanic dataset presented in the introduction. For the fourth rule, fare is the 6th variable and since $\hat{q}^{(6)}_{n,4} = 10.5$, the corresponding path is P = {(6, 4, L)}; the associated rule thus returns the empirical survival rate among passengers with a fare below 10.5 if $x^{(6)} < 10.5$, and the empirical survival rate among the other passengers otherwise. Finally, a Θ-random tree generates a collection of paths in Π, one for each internal and terminal node. In the sequel, we let T(Θ, D_n) be the list of such extracted paths, a random subset of Π.
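A minimal sketch of the elementary rule $\hat{g}_{n,\mathcal{P}}$ (illustrative Python; the data structures, a quantile lookup table and observations stored as dicts indexed by variable, are ours): the rule returns the empirical mean of Y inside the hyperrectangle for query points falling in it, and the empirical mean over the complement otherwise.

```python
def in_cell(x, path, quantiles):
    """Check whether x falls in the hyperrectangle of a symbolic path.
    quantiles[j][r] holds the empirical r-th q-quantile of variable j."""
    for j, r, side in path:
        cut = quantiles[j][r]
        if (x[j] < cut) != (side == "L"):
            return False
    return True

def make_rule(path, quantiles, X, y):
    """Elementary rule: empirical mean of Y inside the cell, empirical
    mean over the complement outside (with the convention 0/0 = 0)."""
    inside = [yi for xi, yi in zip(X, y) if in_cell(xi, path, quantiles)]
    outside = [yi for xi, yi in zip(X, y) if not in_cell(xi, path, quantiles)]
    p_in = sum(inside) / len(inside) if inside else 0.0
    p_out = sum(outside) / len(outside) if outside else 0.0
    return lambda x: p_in if in_cell(x, path, quantiles) else p_out
```

For the Titanic fare example, the path ((6, 4, "L"),) with `quantiles[6][4] = 10.5` yields a rule that reads "if fare < 10.5, then the inside rate, else the outside rate".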
Rule selection. Using our modified random forest algorithm, we are able to generate a large number M of trees, randomized by $\Theta_1, \ldots, \Theta_M$, i.i.d. copies of the generic variable Θ, and then to extract a large collection of rules. Since we are interested in selecting the most important rules, i.e., those which represent strong patterns between the inputs and the output, we select rules that are shared by a large portion of trees. Such an occurrence frequency is formally defined by $\hat{p}_{M,n}(\mathcal{P}) = \frac{1}{M} \sum_{\ell=1}^{M} \mathbb{1}_{\mathcal{P} \in T(\Theta_\ell, \mathcal{D}_n)}$, which is the Monte-Carlo estimate of the probability that a path P belongs to a Θ-random tree, that is, $p_n(\mathcal{P}) = \mathbb{P}(\mathcal{P} \in T(\Theta, \mathcal{D}_n) \,|\, \mathcal{D}_n)$.
As a general strategy, once the modified random forest has been built, we draw the list of all paths that appear in the forest and only retain those that occur with a frequency larger than a threshold p_0 ∈ (0, 1), the only influential parameter of SIRUS; see Subsection 5.4 for its tuning procedure. We are thus interested in the set of the extracted paths $\hat{\mathcal{P}}_{M,n,p_0} = \{\mathcal{P} \in \Pi : \hat{p}_{M,n}(\mathcal{P}) > p_0\}$.
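The frequency estimation and thresholding step can be sketched as follows (illustrative Python; paths are assumed to be hashable tuples of triplets, and `forest_paths`, a list of per-tree path lists T(Θ_ℓ, D_n), is a hypothetical input):

```python
from collections import Counter

def select_paths(forest_paths, p0):
    """Monte-Carlo occurrence frequencies over M trees, then selection of
    the paths whose frequency is strictly larger than p0."""
    M = len(forest_paths)
    counts = Counter()
    for tree_paths in forest_paths:
        counts.update(set(tree_paths))  # a path counts once per tree
    p_hat = {path: c / M for path, c in counts.items()}
    selected = {path for path, p in p_hat.items() if p > p0}
    return selected, p_hat
```

Note the strict inequality, matching the definition of the selected set: a path occurring in exactly a fraction p_0 of the trees is not retained.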
An important feature of the SIRUS algorithm is to stop the growing of the forest with an appropriate number of trees M. Although the right order of magnitude for M is required, no fine tuning is necessary. Indeed, the uncertainty of the importance estimate $\hat{p}_{M,n}(\mathcal{P})$ of each rule decreases with M, whereas the computational cost increases linearly with M. Thus, to obtain a robust rule extraction, M needs to be high enough to make the uncertainty of $\hat{p}_{M,n}(\mathcal{P})$ negligible. More precisely, M is set to get the same list of selected rules $\hat{\mathcal{P}}_{M,n,p_0}$ when SIRUS is run multiple times on the same dataset D_n. On the other hand, M should be small enough to avoid useless computations. Therefore, the growing of the forest is automatically stopped when 95% of the selected rules would be shared on average by a new run of SIRUS on D_n, as it is possible to derive a simple stopping criterion based on the properties of the estimates $\hat{p}_{M,n}(\mathcal{P})$; all the technical details are provided in Subsection 5.4. A random forest is usually built with around 500 trees, as the predictive accuracy cannot be significantly increased by adding more trees. SIRUS typically grows 10 times more trees to obtain a robust rule extraction.
Besides, we insist that the quantile discretization is critical for the rule selection. The expected value of the rule importance is $\mathbb{E}[\hat{p}_{M,n}(\mathcal{P})] = \mathbb{P}(\mathcal{P} \in T(\Theta, \mathcal{D}_n))$, but without the discretization, the list of paths extracted from a random tree T(Θ, D_n) takes values in an uncountable space when at least one component of X is a continuous random variable; the above probability is therefore null for any fixed path, making the path selection procedure unstable with respect to data perturbation.
Rule post-treatment. By construction, there is some redundancy in the list of rules generated by the set of distinct paths $\hat{\mathcal{P}}_{M,n,p_0}$. The hyperrectangles associated with the paths extracted from a Θ-random tree overlap, and so the corresponding rules are linearly dependent. Therefore, a post-treatment to filter $\hat{\mathcal{P}}_{M,n,p_0}$ is needed to remove this redundancy and obtain a compact rule model. The general idea is straightforward: if the rule associated with a path P ∈ $\hat{\mathcal{P}}_{M,n,p_0}$ is a linear combination of rules associated with paths with a higher frequency in the forest, then P is removed from $\hat{\mathcal{P}}_{M,n,p_0}$.
To illustrate the post-treatment, let the tree of Figure 1 be the Θ_1-random tree grown in the forest. Since the paths of the first level of the tree, P_1 and P_2, always occur in the same trees, we have $\hat{p}_{M,n}(\mathcal{P}_1) = \hat{p}_{M,n}(\mathcal{P}_2)$. If we assume these quantities to be greater than p_0, then P_1 and P_2 both belong to $\hat{\mathcal{P}}_{M,n,p_0}$. However, by construction, P_1 and P_2 are associated with the same rule, and we therefore enforce SIRUS to keep only P_1 in $\hat{\mathcal{P}}_{M,n,p_0}$. Each of the paths of the second level of the tree, P_3, P_4, P_5, and P_6, can occur in many different trees, and their associated $\hat{p}_{M,n}$ are distinct (except in very specific cases). Assume for example that $\hat{p}_{M,n}(\mathcal{P}_1) > \hat{p}_{M,n}(\mathcal{P}_4) > \hat{p}_{M,n}(\mathcal{P}_5) > \hat{p}_{M,n}(\mathcal{P}_3) > \hat{p}_{M,n}(\mathcal{P}_6) > p_0$. Since $\hat{g}_{n,\mathcal{P}_3}$ is a linear combination of $\hat{g}_{n,\mathcal{P}_4}$ and $\hat{g}_{n,\mathcal{P}_1}$, P_3 is removed. Similarly, P_6 is redundant with P_1 and P_5, and it is therefore removed. Finally, among the six paths of the tree, only P_1, P_4, and P_5 are kept in the list $\hat{\mathcal{P}}_{M,n,p_0}$.
Rule aggregation. Now, the resulting small set of rules $\hat{\mathcal{P}}_{M,n,p_0}$ is combined to form a simple, compact, and stable rule classification model. We simply average the set of elementary rules $\{\hat{g}_{n,\mathcal{P}} : \mathcal{P} \in \hat{\mathcal{P}}_{M,n,p_0}\}$ selected in the first steps of SIRUS. The aggregated estimate $\hat{\eta}_{M,n,p_0}(x)$ of η(x) is thus defined by $\hat{\eta}_{M,n,p_0}(x) = \frac{1}{|\hat{\mathcal{P}}_{M,n,p_0}|} \sum_{\mathcal{P} \in \hat{\mathcal{P}}_{M,n,p_0}} \hat{g}_{n,\mathcal{P}}(x). \quad (3.3)$ Finally, the classification procedure assigns class 1 to an input x if the aggregated estimate $\hat{\eta}_{M,n,p_0}(x)$ is above a given threshold, and class 0 otherwise. In the introduction, we presented an example of a list of 7 rules for the Titanic dataset. In this case, for a new input x, $\hat{\eta}_{M,n,p_0}(x)$ is simply the average of the output probability of survival p_s over the 7 selected rules.
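The aggregation and thresholding step reduces to a few lines (illustrative Python; rules are assumed to be callables such as those built in the rule generation step):

```python
def sirus_predict(x, rules, threshold=0.5):
    """Average the outputs of the selected elementary rules, giving the
    estimate of eta(x), then threshold to obtain the predicted class."""
    eta = sum(rule(x) for rule in rules) / len(rules)
    return eta, int(eta > threshold)
```

For the Titanic example, `rules` would contain the 7 selected rules, each returning one of its two p_s values for the query passenger.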
In past works on rule ensemble models, such as RuleFit (Friedman and Popescu, 2008) and Node harvest (Meinshausen, 2010), rules are also extracted from a tree ensemble and then combined together through a regularized linear model. In our case, it happens that the parameter p_0 alone is enough to control sparsity. Indeed, in our experiments, we observe that adding such a linear model to the aggregation method hardly increases the accuracy and hardly reduces the size of the final rule set, while it can significantly reduce stability, adds a set of coefficients that makes the model less straightforward to interpret, and requires more intensive computations. We refer to the experiments in Appendix A.3 for a comparison between $\hat{\eta}_{M,n,p_0}$ defined as a simple average (3.3) versus a definition based on a logistic regression.
Categorical and numerical discrete variables. For the sake of clarity, the description of the SIRUS algorithm is limited to the case of continuous numerical variables. However, SIRUS can obviously handle numerical discrete and categorical data, as is the case for random forests. On one hand, numerical discrete variables are left untouched, since the number of possible split points is already finite and the rule definition introduced for continuous variables also applies. On the other hand, we naturally extend the rule definition for categorical variables to "if X^(1) is category 1 or 2 then response else default response"; see the Titanic dataset example in the introduction. Originally, categorical variables are efficiently handled in trees by transforming them into ordered variables, the ordering of categories being done with respect to the output mean for each category (see Breiman et al., 1984; Friedman et al., 2001), and we follow the ranger implementation. Notice that trees are likely to often cut on categorical variables with a high number of categories, as highlighted in Strobl et al. (2006). Consequently, SIRUS is likely to output irrelevant rules associated with such categorical variables. Thus, it is best to discard categorical variables with a high number of categories, or to transform them by regrouping categories or using one-hot encoding before running SIRUS. Finally, note that ordinal variables (e.g., X^(1) ∈ {small, medium, big}) are treated like categorical variables.
Stability. The three main properties to assess the interpretability of SIRUS are simplicity, stability, and predictivity, as already stated. On one hand, a measure of simplicity is naturally provided by the number of rules, and predictivity is given by the misclassification rate or the AUC. On the other hand, stability requires a more thorough discussion. In statistical learning theory, stability refers to the stability of predictions (e.g., Vapnik, 1998). In particular, Rogers and Wagner (1978), Devroye and Wagner (1979), Bousquet and Elisseeff (2002), and Poggio et al. (2004) show that stability and predictive accuracy are closely connected. In our case, we are more concerned with the stability of the internal structure of the model, and, to our knowledge, no general definition exists. So, we state the following tentative definition: a rule learning algorithm is stable if two independent estimations based on two independent samples result in two similar lists of rules. Thus, given a new sample $\mathcal{D}_n'$ independent of $\mathcal{D}_n$, we define $\hat{p}'_{M,n}(\mathcal{P})$ and the corresponding set of paths $\hat{\mathcal{P}}'_{M,n,p_0}$ based on a modified random forest drawn with a parameter Θ' independent of Θ. Then, we measure the stability of SIRUS by the proportion of rules shared by the two sets $\hat{\mathcal{P}}_{M,n,p_0}$ and $\hat{\mathcal{P}}'_{M,n,p_0}$, selected over these two runs of SIRUS on independent samples. We take advantage of a dissimilarity measure between two sets, the so-called Dice-Sorensen index, often used to assess the stability of variable selection methods (Chao et al., 2006; Zucknick et al., 2008; Boulesteix and Slawski, 2009; He and Yu, 2010; Alelyani et al., 2011). This index is defined by $\hat{S}_{M,n,p_0} = \frac{2\,|\hat{\mathcal{P}}_{M,n,p_0} \cap \hat{\mathcal{P}}'_{M,n,p_0}|}{|\hat{\mathcal{P}}_{M,n,p_0}| + |\hat{\mathcal{P}}'_{M,n,p_0}|}, \quad (3.4)$ with the convention 0/0 = 1. This is a measure of stability taking values between 0 and 1: if the intersection between $\hat{\mathcal{P}}_{M,n,p_0}$ and $\hat{\mathcal{P}}'_{M,n,p_0}$ is empty, then $\hat{S}_{M,n,p_0} = 0$, while if $\hat{\mathcal{P}}_{M,n,p_0} = \hat{\mathcal{P}}'_{M,n,p_0}$, then $\hat{S}_{M,n,p_0} = 1$.
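The Dice-Sorensen index (3.4) is straightforward to compute when the two selected rule sets are encoded as Python sets (illustrative sketch):

```python
def dice_sorensen(rules_a, rules_b):
    """Dice-Sorensen stability index between two selected rule sets,
    with the convention 0/0 = 1 for two empty sets."""
    if not rules_a and not rules_b:
        return 1.0
    return 2 * len(rules_a & rules_b) / (len(rules_a) + len(rules_b))
```

For instance, two runs sharing 2 rules out of 3 each give an index of 2/3, while identical sets give 1 and disjoint sets give 0.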
Notice that it is possible to use other metrics to assess the distance between two finite sets (Zucknick et al., 2008): the Jaccard index is another popular example. Although the stability values vary slightly with a different definition, both the asymptotic stability of SIRUS (Section 4) and the empirical stability comparisons between algorithms (Section 5) are insensitive to the choice of stability metric.

Theoretical Analysis of Stability
Among the three minimum requirements for interpretability defined in Section 1, simplicity and predictivity are quite easily met for rule models (Cohen and Singer, 1999; Meinshausen, 2010; Letham et al., 2015). On the other hand, as Letham et al. (2015) recall, building a stable rule ensemble is challenging. Therefore, the main goal of this section is to prove the asymptotic stability of SIRUS, i.e., that, provided the sample size is large enough, SIRUS systematically outputs the same list of rules when run multiple times with independent samples. We also argue that existing tree-based rule algorithms are unstable by design.
In order to show the asymptotic stability of SIRUS, we first need to introduce formal definitions of the mathematical elements involved in the empirical algorithm. We additionally define the theoretical counterpart of SIRUS, an abstract procedure which is not based on the sample D n , but only on the unknown distribution P X,Y . Next, we will prove the stochastic convergence of SIRUS towards its theoretical counterpart. This means that the list of selected rules does not depend on the training data D n , but only on P X,Y , provided that the sample size is large enough. Therefore, the same list of rules is output when SIRUS is run multiple times on independent samples. This mathematical analysis highlights that the remarkable stable behavior of SIRUS in practice has theoretical groundings, and that the discretization of the cut values with the quantiles, as well as using random forests, are the cornerstones to stabilize rule models extracted from tree ensembles.
Empirical algorithm. First, we define the empirical CART-splitting criterion used to find the optimal split at each node of each tree of the forest. In our context of binary classification where the output Y ∈ {0, 1}, maximizing the so-called empirical CART-splitting criterion is equivalent to maximizing the criterion based on Gini impurity (see, e.g., Biau and Scornet, 2016). More precisely, at node H and for a cut performed along the j-th coordinate at the empirical r-th q-quantile \hat{q}_{n,r}^{(j)}, this criterion reads

L_n(H, \hat{q}_{n,r}^{(j)}) = \frac{1}{N_n(H)} \sum_{i=1}^{n} (Y_i - \bar{Y}_H)^2 \mathbf{1}_{X_i \in H} - \frac{1}{N_n(H)} \sum_{i=1}^{n} \big(Y_i - \bar{Y}_{H_L} \mathbf{1}_{X_i^{(j)} < \hat{q}_{n,r}^{(j)}} - \bar{Y}_{H_R} \mathbf{1}_{X_i^{(j)} \geq \hat{q}_{n,r}^{(j)}}\big)^2 \mathbf{1}_{X_i \in H},   (4.1)

where \bar{Y}_H is the average of the Y_i's such that X_i ∈ H, N_n(H) is the number of data points X_i falling into H, and H_L and H_R are the two children nodes obtained by cutting H at \hat{q}_{n,r}^{(j)} along the j-th coordinate. Note that, for ease of reading, (4.1) is defined for a tree built with the entire dataset D_n without resampling. As is often the case in the theoretical analysis of random forests, we assume throughout this section that the subsampling of a_n observations to build each tree is done without replacement, to alleviate the mathematical analysis.
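The criterion above can be made concrete in a few lines. The following toy implementation (ours, not the sirus code) computes the variance-reduction form of (4.1) for a single coordinate and restricts the candidate cuts to the empirical deciles, as SIRUS does with q = 10; the one-dimensional data is made up for illustration.

```python
import statistics

def cart_split_criterion(xs, ys, cut):
    """Empirical CART-splitting criterion, variance-reduction form
    (equivalent to the Gini impurity decrease for binary y):
    node variance of y minus the within-children variance after the cut."""
    def mean(v):
        return sum(v) / len(v)
    left = [y for x, y in zip(xs, ys) if x < cut]
    right = [y for x, y in zip(xs, ys) if x >= cut]
    if not left or not right:
        return 0.0
    y_bar = mean(ys)
    node_var = sum((y - y_bar) ** 2 for y in ys) / len(ys)
    child_var = (sum((y - mean(left)) ** 2 for y in left)
                 + sum((y - mean(right)) ** 2 for y in right)) / len(ys)
    return node_var - child_var

# Made-up one-dimensional sample: y flips from 0 to 1 around x ≈ 1.
xs = [0.1, 0.4, 0.5, 0.9, 1.2, 1.5, 2.0, 2.2, 2.8, 3.0]
ys = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]

# As in SIRUS, candidate cuts are restricted to the empirical q-quantiles
# of the coordinate (here the 9 decile cut points for q = 10).
cuts = statistics.quantiles(xs, n=10)
best = max(cuts, key=lambda c: cart_split_criterion(xs, ys, c))
# The best decile cut separates the two classes perfectly.
```
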
Recall that the rule selection is based on the probability p_n(P) that a Θ-random tree of the forest contains a particular path P ∈ Π, that is, p_n(P) = P(P ∈ T(Θ, D_n) | D_n), and that the Monte-Carlo estimate \hat{p}_{M,n}(P) of p_n(P) is directly computed using the random forest, and takes the form

\hat{p}_{M,n}(P) = \frac{1}{M} \sum_{\ell=1}^{M} \mathbf{1}_{P \in T(\Theta_\ell, D_n)}.

Clearly, \hat{p}_{M,n}(P) is a good estimate of p_n(P) when M is large since, by the law of large numbers, conditionally on D_n, lim_{M→∞} \hat{p}_{M,n}(P) = p_n(P) a.s.
We also see that \hat{p}_{M,n}(P) is unbiased, since E[\hat{p}_{M,n}(P) | D_n] = p_n(P).
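A toy simulation makes both properties concrete. Here the forest is replaced by hypothetical per-tree path sets, with each path included independently with a fixed probability p_n(P); the path encoding and the probability values are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical paths, encoded as (variable, quantile index, side) triples,
# and their (unknown in practice) occurrence probabilities p_n(P).
PATHS = [("X1", 5, "<"), ("X2", 3, ">="), ("X3", 8, "<")]
p_n = {PATHS[0]: 0.9, PATHS[1]: 0.5, PATHS[2]: 0.1}

# Simulate a forest of M trees: each tree contains path P independently
# with probability p_n(P), mimicking P(P ∈ T(Θ, D_n) | D_n).
M = 10_000
forest = [{P for P in PATHS if random.random() < p_n[P]} for _ in range(M)]

def p_hat(path, forest):
    """Monte-Carlo estimate: fraction of trees whose path list contains P."""
    return sum(path in tree for tree in forest) / len(forest)

# Law of large numbers: p_hat converges to p_n(P) as M grows.
for P in PATHS:
    assert abs(p_hat(P, forest) - p_n[P]) < 0.02

# The rule selection then keeps the paths with p_hat above a threshold p_0.
p_0 = 0.3
selected = [P for P in PATHS if p_hat(P, forest) > p_0]
```
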
Theoretical algorithm. Next, we define the theoretical counterparts of all empirical quantities involved in SIRUS, which do not depend on D_n but only on the unknown distribution P_{X,Y} of (X, Y). For a given integer q ≥ 2 and r ∈ {1, ..., q−1}, the theoretical q-quantiles are defined by

q_r^{*(j)} = \inf \{ x \in \mathbb{R} : P(X^{(j)} \leq x) \geq r/q \},

i.e., the population version of \hat{q}_{n,r}^{(j)} defined in (4.2). Similarly, for a given hyperrectangle H ⊆ R^p, we let the theoretical CART-splitting criterion be

L^*(H, q_r^{*(j)}) = V[Y | X \in H] - P(X^{(j)} < q_r^{*(j)} | X \in H) \, V[Y | X^{(j)} < q_r^{*(j)}, X \in H] - P(X^{(j)} \geq q_r^{*(j)} | X \in H) \, V[Y | X^{(j)} \geq q_r^{*(j)}, X \in H].

Based on this criterion, we denote by T*(Θ) the list of all paths contained in the theoretical tree built with randomness Θ, where splits are chosen to maximize the theoretical criterion L* instead of the empirical one L_n defined in (4.1). We stress again that the list T*(Θ) does not depend upon D_n, but only upon the unknown distribution of (X, Y). Next, we let p*(P) be the theoretical counterpart of p_n(P), that is, p*(P) = P(P ∈ T*(Θ)), and finally define the theoretical set of selected paths P*_{p_0} = {P ∈ Π : p*(P) > p_0} (with the same post-treatment as for the empirical procedure; see Section 3). Notice that, in the case where multiple splits have the same value of the theoretical CART-splitting criterion, one is randomly selected.
Consistency of the path selection. The construction of the rule ensemble model essentially relies on the path selection and on the estimates \hat{p}_{M,n}(P), P ∈ Π. Therefore, our theoretical analysis first focuses on the asymptotic properties of these estimates, in Theorem 1. Our consistency results hold under conditions on the subsampling rate a_n and the number of trees M_n, together with some assumptions on the distribution of the random vector X, given below.
(A1) The subsampling rate a_n satisfies lim_{n→∞} a_n = ∞ and lim_{n→∞} a_n/n = 0.

(A2) The number of trees M_n satisfies lim_{n→∞} M_n = ∞.

(A3) X has a strictly positive density f with respect to the Lebesgue measure. Furthermore, for all j ∈ {1, . . . , p}, the marginal density f^{(j)} of X^{(j)} is continuous, bounded, and strictly positive.
We can now state the consistency of the occurrence frequency of each possible path P ∈ Π in the modified random forest.

Theorem 1. Assume that Assumptions (A1)-(A3) are satisfied. Then, for all paths P ∈ Π,

lim_{n→∞} \hat{p}_{M_n,n}(P) = p*(P)  in probability.

Stability. The only source of randomness in the selection of the rules lies in the estimates \hat{p}_{M_n,n}(P). Since Theorem 1 states the consistency of this estimation, the path selection consistency follows, for all threshold values p_0 that do not belong to the finite set U = {p*(P) : P ∈ Π} of theoretical probabilities of appearance of each path P. Indeed, if p_0 = p*(P) for some P ∈ Π, then P(\hat{p}_{M_n,n}(P) > p_0) does not necessarily converge to 0 or 1, and the path selection can be inconsistent. We can then deduce that SIRUS is asymptotically stable, as stated in the following Corollary 1.

Corollary 1. Assume that Assumptions (A1)-(A3) are satisfied. Then, for all p_0 ∈ [0, 1] \ U,

lim_{n→∞} \hat{S}_{M_n,n,p_0} = 1  in probability.
Competitors. As we discuss further in the experimental Section 5, CART, C5.0, RuleFit, and Node harvest are top competitors of SIRUS that are also based on rule extraction from trees. However, these algorithms do not include a pre-processing discretization step, which makes them unstable by design. To see this, we first adapt the definition of an extracted path without discretization as P = {(j_k, z_k, s_k), k = 1, ..., d}, where z_k ∈ R is now the cutting value of the k-th split. For any rule algorithm, we also define \hat{S}_{M,n} as the proportion of rules shared between the output rule lists over two runs with two independent samples. Note that M = 1 for CART and C5.0, and, as already mentioned, a rule algorithm can be defined from CART by extracting its nodes, as is done in C5.0. Then, for any tree-based rule algorithm, \hat{S}_{M,n} = 0 almost surely. Indeed, since the input X takes continuous values (Assumption (A3)) and decision trees cut at the middle of two observations in all directions, the probability that a cutting value from the tree built with D_n equals one from the tree built with the independent sample D'_n is null.
However, recall that in the experiments, we include a pre-processing discretization step to stabilize competitors and enable fair comparisons. With this modification, they reach a value of \hat{S}_{M,n} > 0, but still not on par with SIRUS. This shows that the strong stability improvement of SIRUS does not only come from the discretization, but mainly from the rule selection procedure, based on the probability of rule occurrence in a random tree.
Proofs. The proof of Theorem 1 can be found in Appendix C; a sketch is nonetheless given here. Corollary 1 is a direct consequence of Theorem 1, and its full proof follows.
Sketch of proof of Theorem 1. The consistency is obtained by showing that \hat{p}_{M_n,n}(P) is asymptotically unbiased with a null variance. The result for the variance is quite straightforward, since the variance of \hat{p}_{M_n,n}(P) can be broken into two terms: the variance generated by the Monte-Carlo randomization, which goes to 0 as the number of trees increases (Assumption (A2)), and the variance of p_n(P). Following Mentch and Hooker (2016), since p_n(P) is a bagged estimate, it can be seen as an infinite-order U-statistic, and a classic bound on the variance of U-statistics gives that V[p_n(P)] converges to 0 if lim_{n→∞} a_n/n = 0, which holds by Assumption (A1). Next, proving that \hat{p}_{M_n,n}(P) is asymptotically unbiased requires diving into the internal mechanisms of the random forest algorithm. To do this, we show that the CART-splitting criterion is consistent (Lemma 3) and asymptotically normal (Lemma 4) when cuts are limited to empirical quantiles (estimated on the same dataset) and the number of trees grows with n. When cuts are performed at the theoretical quantiles, the law of large numbers and the central limit theorem can be directly applied, so that the proof of Lemmas 3 and 4 boils down to showing that the difference between the empirical CART-splitting criterion evaluated at the empirical and at the theoretical quantiles converges to 0 in probability fast enough. This is done in Lemma 2, thanks to Assumption (A3).
Proof of Corollary 1. The result is a consequence of Theorem 1. Since p_0 ∉ U, we deduce from Theorem 1 and the continuous mapping theorem that, for all P ∈ Π,

lim_{n→∞} \mathbf{1}_{\hat{p}_{M_n,n}(P) > p_0} = \mathbf{1}_{p*(P) > p_0}  in probability.

Experiments
We begin this section by providing the overall experimental settings. Next, we focus on a case study to illustrate SIRUS on an industrial process example: the semi-conductor manufacturing SECOM data (Dua and Graff, 2017). In particular, this case study shows the excellent performance of SIRUS on real data in a noisy and high-dimensional setting. In Subsection 5.3, we use 19 UCI datasets (Dua and Graff, 2017) to perform extensive comparisons between SIRUS and its main competitors. We show that SIRUS produces much more stable rule lists, while preserving a predictive accuracy and computational complexity comparable to the top competitors. Finally, in Subsection 5.4, we detail the tuning procedure of the single hyperparameter p_0, along with a thorough discussion of the design of SIRUS. In particular, the restriction of cuts to quantiles and the number of constraints in the selected rules are analyzed, and we also provide the stopping criterion for the number of trees.

Experiment Description
Performance metrics. We first introduce relevant metrics to assess the three interpretability properties in the experiments. By definition, the size (i.e., the simplicity) of the rule ensemble is the number of selected rules, |\hat{P}_{M,n,p_0}|. To measure the error, 1-AUC is used, estimated by 10-fold cross-validation (repeated 10 times for robustness and standard deviation estimates). With respect to stability, an independent dataset is not available for real data to compute \hat{S}_{M,n,p_0} as defined in (3.4) in Section 3. Nonetheless, we can take advantage of the cross-validation process to compute a stability metric: the proportion of rules shared by two models built during the cross-validation, averaged over all possible pairs (Guidotti and Ruggieri, 2019).

Datasets.
We have conducted experiments on the SECOM data, as well as on 19 diverse public datasets from the UCI repository (Dua and Graff, 2017; the data is described in Table 1). These experiments aim at illustrating the good behavior of SIRUS over its competitors in various settings. To compare the stability of the different methods, the data is discretized using the 10 empirical quantiles of each continuous variable, and the same stability metric is used for all algorithm comparisons. For the simplicity and predictivity metrics, we do not apply this discretization pre-processing step, unless the algorithm only handles categorical data.
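The discretization pre-processing can be sketched as follows (our sketch, using Python's standard library rather than the R implementations used in the experiments): each continuous value is replaced by the index of its decile bin, so that all compared algorithms cut on the same finite grid.

```python
import bisect
import random
import statistics

random.seed(1)
x = [random.gauss(0.0, 1.0) for _ in range(1000)]  # one continuous variable

# The 9 empirical cut points defining the 10 decile bins.
deciles = statistics.quantiles(x, n=10)

def discretize(value, cuts):
    """Map a continuous value to the index of its quantile bin (0 to 9)."""
    return bisect.bisect_right(cuts, value)

x_disc = [discretize(v, deciles) for v in x]
# Each of the 10 bins holds roughly 10% of the observations.
```
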
Competitors. For decision trees, we run both CART and C5.0, and trees are pruned to maximize their performance. Notice that, to enable simplicity and stability comparisons for CART, a list of rules is extracted from its nodes, as is originally done in C5.0. For rule algorithms based on greedy heuristics, we evaluate RIPPER, PART, and FOIL. Next, for rule algorithms based on tree ensembles, we evaluate RuleFit and Node harvest. Note that categorical features are transformed into multiple binary variables, as required by these two software implementations, and that RuleFit is limited to rule predictors. For RuleFit, the lasso penalty is tuned by cross-validation as defined in Friedman and Popescu (2008). As advertised in Meinshausen (2010), Node harvest does not require parameter tuning by default, but it is also possible to add a regularization term to reduce the model size. We use the same tuning procedure as for SIRUS, to maximize accuracy with the smallest possible model (see Subsection 5.4). Finally, for rule algorithms based on frequent pattern mining, we run the experiments for CBA and BRL. Note that we use default settings for BRL, since modifying its parameters does not significantly improve accuracy and can hurt stability. We use the available R implementations, in particular rpart (Therneau and Atkinson). We also use our R/C++ software implementation sirus (Benard and Wright, 2020) (available from CRAN), adapted from ranger, a fast random forest implementation (Wright and Ziegler, 2017). Besides, notice that for the SIRUS experiments, we use the default settings of random forests, well known for their excellent behavior, in particular mtry = ⌊p/3⌋. We set q = 10 quantiles and tune p_0 as specified in Subsection 5.4.

Case Study: Manufacturing Process Data
SIRUS is run on a real manufacturing process of semi-conductors, the SECOM dataset (Dua and Graff, 2017). Data is collected from sensors and process measurement points to monitor the production line, resulting in 590 numeric variables. Each of the 1567 data points represents a single production entity, associated with a pass or fail output (0/1) from in-house line testing. As is often the case for a production process, the dataset is unbalanced and contains 104 fails, i.e., a failure rate p_f of 6.6%. We apply a simple pre-processing to the data: missing values (about 5% of the total) are replaced by the median. Figure 2 shows predictivity versus the number of rules as p_0 varies, with the optimal p_0 displayed. Notice that the relation between p_0 and the number of rules is monotone by construction, but also highly nonlinear. Therefore, we use the number of rules for the x-axis of Figure 2 to improve readability. The 1-AUC value is 0.30 for SIRUS (for the optimal p_0 = 0.04), 0.29 for Breiman's random forests, and 0.48 for a pruned CART tree. Thus, in that case, the CART tree predicts no better than the random classifier, whereas SIRUS has an accuracy similar to random forests. The final model has 6 rules and a stability of 0.72, i.e., on average 4 to 5 rules are shared by two models built in a 10-fold cross-validation process, simulating data perturbation. By comparison, Node harvest outputs 36 rules with a value of 0.32 for 1-AUC.
Finally, the output of SIRUS may be displayed in the simple and interpretable form of Figure 3, the output in the R console of the package sirus for the SECOM data. Such a rule model makes it immediately clear how the most relevant variables impact failures. Among the 590 variables, 5 are enough to build a model as predictive as random forests, and this selection is robust. Other rules alone may also be informative, but they do not add information to the model, since the error 1-AUC is already minimal with the 6 selected rules. Production engineers should therefore first focus on these 6 rules to investigate an improved setting of the production process. We insist that the stability of the output rule list is critical in practice. Indeed, the algorithm may be run multiple times during the analysis, possibly with an additional small batch of new data. The output rule list should be quite insensitive to such perturbations: domain experts are skeptical of unstable results, which are symptoms of a partial and arbitrary modelling of the true phenomenon. SIRUS is stable, but this is not the case for decision trees or existing rule algorithms, as we show in the next subsection and illustrate in Appendix A.1.

Improvement over Competitors
Overall, we observe that SIRUS provides a substantial stability improvement over state-of-the-art rule algorithms, while preserving their other properties. For the top competitors, experimental results are gathered in Table 2 for model size, Table 3 for stability, and Table 4 for predictive accuracy. Experiments for additional competitors are provided in Appendix A.2, in Tables 7, 8, and 9. Standard deviations are made negligible by averaging metrics over 10 repetitions of the cross-validation, and are not displayed in the tables to improve readability. Figure 4 provides an example, for the dataset "Credit German", of the dependence between predictivity and the number of rules as p_0 varies. In that case, the minimum of 1-AUC is about 0.25 for SIRUS, 0.20 for Breiman's forests, and 0.29 for the CART tree. For the chosen p_0, SIRUS returns a compact set of 22 rules with a stability of 0.66. Figure 5 provides another example of the good practical performance of SIRUS, on the "Heart Statlog" dataset. Here, the predictivity of random forests is reached with 16 rules, with a stability of 0.83, i.e., about 13 rules are consistent between two different models built in a 10-fold cross-validation. Thus, the final models are simple, quite robust to data perturbation, and have a predictive accuracy close to random forests.
We can draw the following conclusions from the experimental comparisons with competitors, displayed in Tables 2, 3, and 4. SIRUS produces more stable and more predictive rule lists than decision trees, for a comparable simplicity, but at the price of a higher computational complexity, since many trees are grown. SIRUS produces much more stable and shorter rule lists than RuleFit and Node harvest, for a comparable accuracy and computational complexity. Classical rule algorithms exhibit similar properties to decision trees: a smaller computational complexity, but high instability and reduced predictivity. Finally, algorithms based on frequent pattern mining exhibit quite good stability properties, higher than the other types of competitors, but their predictive accuracy is worse than that of decision trees. Experiments in Tables 2, 3, and 4 show that SIRUS exhibits a strong stability and predictivity improvement over these methods. Besides, simplicity varies across algorithms: CBA produces much longer rule lists than SIRUS, whereas BRL generates shorter models.

SIRUS Parameters
SIRUS relies on a single tuning hyperparameter: the selection threshold p_0 involved in the definition of \hat{P}_{M,n,p_0} to filter the most important rules, which therefore controls the simplicity of the model, and consequently also its accuracy and stability. On the other hand, SIRUS is not very sensitive to the other parameters: the number of trees, the number of quantiles, and the tree depth. Therefore, they do not require fine tuning, and we simply set efficient default values, as explained below.
Tuning of SIRUS. The parameter p_0 should be set to optimize a tradeoff between the number of rules, stability, and accuracy. In practice, it is difficult to settle such a criterion, and we choose to optimize p_0 to maximize predictive accuracy with the smallest possible set of rules. To achieve this, we proceed as follows. The error 1-AUC is estimated by 10-fold cross-validation for a fine grid of p_0 values, defined such that |\hat{P}_{M,n,p_0}| varies from 1 to 25 rules. (We let 25 be an arbitrary upper bound on the maximum number of rules, considering that a bigger set is not readable anymore.) The randomization introduced by the partition of the dataset into the 10 folds of the cross-validation process has a significant impact on the variability of the size of the final model. Therefore, in order to get a robust estimate of p_0, the cross-validation is repeated multiple times (typically 10) and results are averaged. The standard deviation of the mean of 1-AUC is computed over these repetitions for each p_0 of the grid search. We consider that all models within 2 standard deviations of the minimum of 1-AUC are not significantly less predictive than the optimal one. Among these models, the one with the smallest number of rules is selected, i.e., the optimal p_0 is shifted towards higher values to reduce the model size without decreasing predictivity (see Figures 4 and 5 for examples). This approach is very similar to the tuning procedure of the lasso (Tibshirani, 1996).
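The selection rule described above can be sketched as follows; the grid of (p_0, model size, error, standard deviation) values is hypothetical, and the function is our illustration rather than the sirus implementation.

```python
def select_p0(grid):
    """grid: (p_0, number of rules, mean 1-AUC, sd of the mean) tuples.
    Keep models within 2 standard deviations of the best error, then
    return the sparsest one."""
    best_err = min(err for _, _, err, _ in grid)
    candidates = [g for g in grid if g[2] <= best_err + 2 * g[3]]
    return min(candidates, key=lambda g: g[1])

# Hypothetical cross-validation results on a grid of p_0 values.
grid = [
    # p_0, rules, 1-AUC,  sd
    (0.01,  25,  0.250, 0.004),
    (0.02,  14,  0.252, 0.004),
    (0.04,   6,  0.255, 0.004),
    (0.08,   2,  0.300, 0.005),
]
p0, n_rules, err, _ = select_p0(grid)
# 6 rules at p_0 = 0.04: not significantly worse than the 25-rule model,
# so the sparser model wins.
```
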
Number of trees. The accuracy, stability, and computational cost of SIRUS increase with the number of trees M. Thus, we design a stopping criterion to grow the minimum number of trees that ensures accuracy and stability higher than 95% of their maximum asymptotic values with respect to M, conditionally on D_n. We empirically observe that the stability requirement is met for a much larger number of trees than the accuracy requirement (about 10 times). Therefore, the stopping criterion is based on stability alone. More precisely, we require that, on average, 95% of the rules are identical across two runs of SIRUS on a given dataset D_n. Formally, the mean stability E[\hat{S}_{M,n,p_0} | D_n] measures the expected proportion of rules shared by two fits of SIRUS on D_n, for fixed n (sample size), p_0 (threshold), and M (number of trees). Thus, the stopping criterion takes the form

1 − E[\hat{S}_{M,n,p_0} | D_n] < α,   (5.1)

with typically α = 0.05.
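To build intuition for this criterion: conditional on D_n, M\hat{p}_{M,n}(P) follows a binomial distribution with M trials and success probability p_n(P), so the probability that a path survives the threshold p_0 in one run of the forest is 1 − Φ(Mp_0, M, p_n(P)), with Φ the binomial cdf. The sketch below is ours (the values of M, p_0, and p_n are made up) and shows why only paths with p_n(P) close to p_0 destabilize the selection.

```python
from math import comb

def binom_cdf(k, M, p):
    """P(Bin(M, p) <= k), the binomial cdf Φ(k, M, p)."""
    return sum(comb(M, i) * p**i * (1 - p) ** (M - i) for i in range(int(k) + 1))

M, p_0 = 1000, 0.1

# Survival probability P(p_hat > p_0) = 1 - Φ(M p_0, M, p_n(P))
# for three hypothetical paths:
far_above = 1 - binom_cdf(M * p_0, M, 0.20)   # close to 1: always kept
far_below = 1 - binom_cdf(M * p_0, M, 0.02)   # close to 0: always discarded
borderline = 1 - binom_cdf(M * p_0, M, 0.10)  # close to 1/2: unstable path

# Growing more trees sharpens the cdf around p_n(P), shrinking the
# unstable region around p_0; the criterion picks M so that the expected
# overlap between two runs reaches 95%.
```
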
There are two obstacles to operationalizing this stopping criterion: its estimation and its dependence on p_0. We make two approximations to overcome these limitations, and give empirical and theoretical evidence of their good practical behavior in Appendix B. First, Theorem 2 in Appendix B.2 provides an asymptotic equivalent, with respect to M, of 1 − E[\hat{S}_{M,n,p_0} | D_n], which we simply estimate by

ε_{M,n,p_0} = \frac{\sum_{P \in \Pi} \Phi(Mp_0, M, \hat{p}_{M,n}(P)) \big(1 - \Phi(Mp_0, M, \hat{p}_{M,n}(P))\big)}{\sum_{P \in \Pi} \big(1 - \Phi(Mp_0, M, \hat{p}_{M,n}(P))\big)},

where Φ(Mp_0, M, p) denotes the cdf of a binomial distribution with M trials and success probability p, evaluated at Mp_0. Secondly, ε_{M,n,p_0} depends on p_0, whose optimal value is unknown in the first step of SIRUS, when trees are grown. It turns out, however, that ε_{M,n,p_0} is not very sensitive to p_0 (see Appendix B).

Quantile discretization. In the modified random forest grown in the first step of SIRUS, the split at each tree node is limited to the empirical q-quantiles of each component of X, as described in Section 3. Thus, we check that this modification alone has little impact on the forest accuracy. Using the R package ranger, 1-AUC is estimated for each dataset with 10-fold cross-validation for q ∈ {2, 5, 10, 20}. We leave aside datasets with a majority of categorical variables; results are averaged over 10 repetitions of the cross-validation, and displayed in Table 5. Clearly, the decrease of accuracy generated by this discretization is small, and not very sensitive to q, provided that q is not too small. Thus, q = 10 appears to be a good default choice from the experiments. In fact, the small impact of the discretization on the forest error is not surprising: with only p = 10 input variables, the input space is split into a fine grid of 10^10 hyperrectangles for q = 10 quantiles, providing high flexibility for the modified random forest to identify local patterns.
Tree depth. When SIRUS is fit with fully grown trees, the final set of rules \hat{P}_{M,n,p_0} contains almost exclusively rules made of one or two splits, and rarely of three. Although this may appear surprising at first glance, this phenomenon is in fact expected. Indeed, rules made of multiple splits are extracted from deeper tree levels and are thus, by construction, more sensitive to data perturbation. This results in much smaller values of \hat{p}_{M,n}(P) for rules with a high number of splits, and hence in their deletion from the final set of paths through the threshold p_0: \hat{P}_{M,n,p_0} = {P ∈ Π : \hat{p}_{M,n}(P) > p_0}. To illustrate this, consider the following typical example with p = 100 input variables and q = 10 quantiles. There are qp = 100 × 10 = 10^3 possible splits at the root node of a tree, and therefore 2qp = 2 × 10^3 paths of one split. Since the left and right paths of a split at the root node are associated with the same rule, there are qp = 10^3 distinct rules of one split, about (2qp)^2 ≈ 10^6 distinct rules of two splits, and about (2qp)^3 ≈ 10^10 distinct rules of three splits. Using only rules of one split is too restrictive, since it generates a small model class (a thousand rules for 100 input variables) and does not handle variable interactions. On the other hand, rules of two splits are numerous (about one million) and thus provide large flexibility to SIRUS. More importantly, since there are 10 billion rules of three splits, a stable selection of a few of them is clearly a difficult task, and such complex rules are naturally discarded by SIRUS.
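A quick sanity check of this counting, with the same p = 100 and q = 10 as in the example above (the orders of magnitude in the text round these exact counts):

```python
# Counting distinct rules by depth for p variables and q quantiles.
p, q = 100, 10
one_split_rules = p * q                # 10^3: left/right paths share a rule
two_split_rules = (2 * p * q) ** 2     # 4 * 10^6, the "about 10^6" above
three_split_rules = (2 * p * q) ** 3   # 8 * 10^9, the "about 10^10" above
```
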
In the software implementation sirus, the tree depth parameter max.depth is a modifiable input, set to 2 by default to reduce the computational cost while leaving the output list of rules almost untouched, as explained above. We conduct experiments where SIRUS is run with a tree depth of 1, 2, and 3, and results are displayed in Table 6. Over the nineteen UCI datasets, rules of three splits appear in the SIRUS rule list in only four cases, and a significant accuracy improvement over a tree depth of 2 occurs only once, for the "Mushrooms" dataset. On the other hand, for all datasets except two, SIRUS outputs rules of two constraints, and predictivity is improved over a tree depth of 1 for half of the datasets. The Titanic example shows how the rule list is drastically simplified by limiting tree depth to 1, lowering the insights provided by SIRUS:

Average survival rate p_s = 39%.
if sex is male then p_s = 19% else p_s = 74%
if 1st or 2nd class then p_s = 56% else p_s = 24%

This analysis of tree depth is not new. Indeed, both the RuleFit (Friedman and Popescu, 2008) and Node harvest (Meinshausen, 2010) articles discuss the optimal tree depth for rule extraction from a tree ensemble in their experiments, and both conclude that the optimal depth is 2. Hence, a hard limit of 2 is used in Node harvest. RuleFit is slightly less restrictive: for each tree, the depth is randomly sampled from an exponential distribution concentrated on 2, but allowing a few trees of depth 1, 3, and 4. We insist that both reach this conclusion without considering stability issues, focusing only on accuracy. Taking stability into account further consolidates that growing shallow trees is optimal for rule extraction from tree ensembles.

Table 6: SIRUS error (1-AUC) over a 10-fold cross-validation (averaged over 10 repetitions) when tree depth is limited to 1, 2, or 3. Values within 10% of the minimum are displayed in bold, except for datasets with no significant variations.

Conclusion
Interpretability of learning algorithms is required for applications involving critical decisions, for example the analysis of production processes in the manufacturing industry. Although interpretability does not have a precise definition, we have argued that simplicity, stability, and predictivity are minimum requirements. In particular, decision trees and rule algorithms both combine a simple structure and good accuracy on nonlinear data, and are thus considered state-of-the-art interpretable algorithms. However, these methods are unstable with respect to data perturbation, which is a strong operational limitation. Therefore, we proposed a new rule algorithm for classification, SIRUS (Stable and Interpretable RUle Set), which takes the form of a short list of rules. We proved that SIRUS considerably improves stability over state-of-the-art algorithms, while preserving the simplicity, accuracy, and computational complexity of the top competitors. The principle of SIRUS is to extract rules from a random forest based on their probability of occurrence in a random tree, and to stop growing the forest when the rule selection has converged. Thus, SIRUS inherits the computational complexity of random forests, and has only one tuning parameter. A software implementation, the R/C++ package sirus (Benard and Wright, 2020), is available from CRAN. Besides, we believe that the extension of SIRUS to regression is a promising direction for future research: the main challenge is the construction of an appropriate rule aggregation framework to accurately estimate continuous outputs without hurting stability. Furthermore, although SIRUS can handle high-dimensional data, as illustrated with the SECOM dataset (590 inputs), specific variable selection strategies could be used to reduce the number of possible rules and thus improve SIRUS performance.

A.1 Robustness Illustration
For the SECOM dataset used in the experimental Section 5 of the article, only three rule algorithms achieve the same predictivity as random forests: RuleFit, Node harvest, and SIRUS (1-AUC of 0.30, whereas CART and BRL are no better than the random classifier, with an error of 1-AUC = 0.5). SIRUS produces a short and stable list of 6 rules, while RuleFit and Node harvest generate complex, long, and unstable rule lists. Rule algorithms based on tree ensembles are stochastic, since they rely on the tree randomness Θ_1, ..., Θ_M. Consequently, RuleFit and Node harvest output different rule lists when run multiple times on the same dataset. Such behavior is a strong limitation in practice, as domain experts become skeptical of the algorithm's conclusions. On the other hand, SIRUS is built to have a robust rule extraction mechanism, and the same list of rules is output over multiple repetitions with the same data, as proved in Theorem 2 in the next section.
To illustrate this, we run each algorithm twice on the SECOM dataset, and display the output models in Figure 6 for SIRUS, Figure 7 for Node harvest, and Figure 8 for RuleFit. We set the regularization parameters of Node harvest and SIRUS as explained in Subsection 5.4 of the article, to maximize accuracy with the smallest possible model: λ = 4 for Node harvest, and p_0 = 0.04 for SIRUS. RuleFit is tuned as defined in Friedman and Popescu (2008). Figures 7 and 8 show that the rule lists output by RuleFit and Node harvest are quite different across multiple runs with the exact same data, while SIRUS produces the same output.
We also observe that, for the same accuracy, the RuleFit and Node harvest models are longer and more complex than those of SIRUS. In addition, these methods aggregate rules using weights to generate predictions. This is not the case for SIRUS, which simply averages the 6 output rules. Finally, we can also mention that manually increasing the regularization of Node harvest, to reduce the model size to 6 rules as in SIRUS, strongly hurts accuracy: 1-AUC rises to 0.39.

A.2 Additional Competitors
Additional experiments are provided to compare SIRUS with other competitors: C5.0 (Quinlan, 1992) (decision tree), and PART (Frank and Witten, 1998) and FOIL (Quinlan and Cameron-Jones, 1995) (classical rule learning algorithms). Model size results are provided in Table 7, stability in Table 8, and error in Table 9. The stability and accuracy improvement of SIRUS is clear.

A.3 Rule Aggregation
Table 9: Model error (1-AUC) over a 10-fold cross-validation for UCI datasets (averaged over 10 repetitions). Values within 10% of the minimum are displayed in bold.

In Section 3 of the article, the estimate \hat{η}_{M,n,p_0}(x) is defined as a simple average of the selected rules. To tackle our binary classification problem, a natural alternative would be to use a logistic regression and define

log\Big(\frac{\hat{η}_{M,n,p_0}(x)}{1 - \hat{η}_{M,n,p_0}(x)}\Big) = β_0 + \sum_{P \in \hat{P}_{M,n,p_0}} β_P \, \hat{g}_{n,P}(x),   (A.2)

where \hat{g}_{n,P} denotes the estimated rule associated with the path P, and the coefficients β_P have to be estimated. To illustrate the performance of the logistic regression (A.2), we consider again the UCI dataset "Credit German". We augment the previous results from Figure 4 (in Section 5 of the article) with the logistic regression error in Figure 9. One can observe that the predictive accuracy is slightly improved, but this comes at the price of an additional set of coefficients that can be hard to interpret (some can be negative), and of an increased computational cost. Notice that categorical variables are one-hot-encoded in this example.

B Stopping Criterion for the Number of Trees M
We recall that the definition of the stopping criterion (5.1) for the forest growing is provided in Section 5 of the main article. First, we provide three groups of experiments to show its good empirical efficiency. In the second subsection, we establish theoretical properties of the stopping criterion.

B.1 Experiments
The following experiments on the UCI datasets show the good empirical performance of the stopping criterion (5.1). Recall that the goal of this criterion is to determine the minimum number of trees M ensuring that two independent fits of SIRUS on the same dataset result in two lists of rules with an overlap of 95% on average. This is checked with a first batch of experiments (see the next paragraph). Second, the stopping criterion (5.1) does not consider the optimal p_0, which is unknown when trees are grown in the first step of SIRUS. Another batch of experiments is therefore run to show that the stability approximation 1 − ε_{M,n,p_0} is quite insensitive to p_0. Finally, a last batch of experiments provides examples of the number of trees grown when SIRUS is fit. Notice that for these experiments, categorical variables are one-hot-encoded.
Experiments 1. For each dataset, the following procedure is applied. SIRUS is run a first time, using criterion (5.1) to stop the growing of the forest. This initial run provides the optimal number of trees M, as well as the set \hat{V}_{M,n} of possible p_0 values. Then, SIRUS is fit twice independently, using the precomputed number of trees M. For each p_0 ∈ \hat{V}_{M,n}, the stability metric \hat{S}_{M,n,p_0} (with D'_n = D_n) is computed over the two resulting lists of rules. Finally, \hat{S}_{M,n,p_0} is averaged across all p_0 values in \hat{V}_{M,n}. This procedure is repeated 10 times; results are averaged and presented in Table 10, with standard deviations in parentheses. Across the considered datasets, the resulting values range from 0.941 to 0.955, and are thus close to 0.95, as expected by construction of criterion (5.1).
Experiments 2. The second type of experiments illustrates that ε_{M,n,p0} is quite insensitive to p0 when M is set with criterion (5.1). For the "Credit German" dataset, we fit SIRUS and then compute 1 − ε_{M,n,p0} for each p0 ∈ V̂_{M,n}. Results are displayed in Figure 10: 1 − ε_{M,n,p0} ranges from 0.90 to 1, where the extreme values are reached for values of p0 corresponding to a very small number of rules, which are not of interest when p0 is selected to maximize predictive accuracy. Thus, 1 − ε_{M,n,p0} is quite concentrated around 0.95 when p0 varies.
Experiments 3. Finally, we display in Table 11 the optimal number of trees when the growing of SIRUS is stopped using criterion (5.1). It ranges from 4220 to 20 650 trees. In Breiman's forests, the number of trees above which the accuracy cannot be significantly improved is typically 10 times lower. However, SIRUS grows shallow trees, and is thus not more computationally demanding than random forests overall.

B.2 Theoretical Properties
We emphasize that growing more trees does not improve predictive accuracy or stability with respect to data perturbation for a fixed sample size n. Indeed, the instability of the rule selection is generated by the variance of the estimates p̂_{M,n}(P), P ∈ Π. Upon noting that we have two sources of randomness, Θ and D_n, the law of total variance shows that V[p̂_{M,n}(P)] can be broken down into two terms: the variance generated by the Monte Carlo randomness Θ on the one hand, and the sampling variance on the other hand. In fact, equation (C.3) in the proof of Theorem 1 below reveals that

V[p̂_{M,n}(P)] = (1/M) E[p̂_n(P)(1 − p̂_n(P))] + V[p̂_n(P)],

since, conditionally on D_n, the M trees are drawn independently. The stopping criterion (5.1) ensures that the first term becomes negligible as M → ∞, so that V[p̂_{M,n}(P)] reduces to the sampling variance V[p̂_n(P)], which is independent of M. Therefore, stability with respect to data perturbation cannot be further improved by increasing the number of trees. Additionally, the trees are only involved in the selection of the paths. For a given set of paths P̂_{M,n,p0}, the construction of the final aggregated estimate η̂_{M,n,p0} (see Section 3 of the article) is independent of the forest. Thus, if further increasing the number of trees does not impact the path selection, it does not improve the predictive accuracy either.
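This variance decomposition can be checked by Monte Carlo on a toy model (the Beta distribution of the selection frequency is an arbitrary assumption for illustration): across simulated datasets, the infinite-forest frequency p̂_n(P) varies, and, given the dataset, each of the M trees contains the path P independently with probability p̂_n(P).

```python
import numpy as np

rng = np.random.default_rng(1)
M = 50            # number of trees
reps = 200_000    # Monte Carlo replications (one simulated "dataset" each)

# Toy model (illustrative assumption): the selection frequency p_n(P) is
# Beta(8, 2)-distributed across datasets; given the dataset, the M trees
# contain the path P independently with that probability.
p_n = rng.beta(8, 2, size=reps)          # sampling (data) randomness
p_Mn = rng.binomial(M, p_n) / M          # Monte Carlo (tree) randomness

lhs = p_Mn.var()                                  # total variance V[p_{M,n}(P)]
rhs = p_n.var() + (p_n * (1 - p_n)).mean() / M    # sampling term + MC term / M
print(lhs, rhs)   # the two values agree up to Monte Carlo error
```

Increasing M in this sketch shrinks only the second term, leaving the sampling variance untouched, which is the point made above.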
Next, Theorem 2 states that, conditionally on D_n and with D'_n = D_n, Ŝ_{M,n,p0} should be close to 1, and also provides an asymptotic approximation of E[Ŝ_{M,n,p0}|D_n] for large values of M, where Φ(Mp0, M, p̂_n(P)) is the cdf of a binomial distribution with parameter p̂_n(P) and M trials, evaluated at Mp0, and where, for all P, P' ∈ Π,

σ_n(P) = √(p̂_n(P)(1 − p̂_n(P))), and ρ_n(P, P') = Cov(1_{P ∈ T(Θ,D_n)}, 1_{P' ∈ T(Θ,D_n)} | D_n) / (σ_n(P) σ_n(P')).
The proof of Theorem 2 can be found in Section D. The equivalent provided in Theorem 2 is defined when the sets of rules P̂_{M,n,p0} and P̂'_{M,n,p0} are not post-treated, which considerably simplifies the analysis of the asymptotic behavior of E[Ŝ_{M,n,p0}|D_n]. Since the post-treatment is deterministic, this operation is not an additional source of instability. Thus, if the estimation of the rule set without post-treatment is stable, it remains stable when the post-treatment is added. Finally, despite its apparent complexity, the asymptotic approximation of 1 − E[Ŝ_{M,n,p0}|D_n] can be easily estimated, and an efficient stopping criterion for the number of trees is therefore derived in (5.1).

C Proof of Theorem 1
We recall Assumptions (A1)-(A3) and Theorem 1 for the sake of clarity.
(A1) The subsampling rate a_n satisfies lim_{n→∞} a_n = ∞ and lim_{n→∞} a_n/n = 0.

(A2) The number of trees M_n satisfies lim_{n→∞} M_n = ∞.

(A3) X has a strictly positive and continuous density f on R^p.

First, we prove Theorem 1 for a path of one split. The proof is extended to a path of two splits in the next subsection and follows the same steps. Finally, the proof can be easily extended to a path of any depth d ∈ N by recursion.
Before proving Theorem 1, we state five lemmas (Lemma 1 to Lemma 5). Their proofs can be found in Subsection C.3. Lemma 1 is a preliminary technical result used to state both Lemma 2 and Lemma 4, case (b).
Lemma 1. Let X be a random variable distributed on R^p such that Assumptions (A1) and (A3) are satisfied. Then, for all j ∈ {1, . . . , p} and all r ∈ {1, . . . , q − 1}, we have

lim_{n→∞} √a_n P(q^{(j)}_r ≤ X^{(j)} < q̂^{(j)}_{n,r}) = 0 and lim_{n→∞} √a_n P(q̂^{(j)}_{n,r} ≤ X^{(j)} < q^{(j)}_r) = 0.

Lemma 2 is used to prove both the consistency (Lemma 3) and the convergence rate (Lemma 4) of the CART-splitting criterion when the root node of the tree is cut at an empirical quantile. Lemma 5 is an intermediate result to prove Theorem 1.
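Lemma 1 can be illustrated numerically (an informal check, not part of the proof): for X uniform on [0, 1], the probability P(q_r ≤ X < q̂_{n,r}) is simply max(q̂_{n,r} − q_r, 0), and with a_n = ⌊√n⌋ (an arbitrary rate satisfying (A1)) the quantity √a_n P(q_r ≤ X < q̂_{n,r}) shrinks as n grows.

```python
import numpy as np

rng = np.random.default_rng(2)

def gap(n):
    """For X ~ Uniform(0, 1), at the median q_r = 0.5 and with a_n = floor(sqrt(n)):
    sqrt(a_n) * P(q_r <= X < hat{q}_{n,r}), which equals sqrt(a_n) * max(q_hat - 0.5, 0)."""
    x = rng.random(n)
    q_hat = np.quantile(x, 0.5)        # empirical quantile from n samples
    a_n = int(np.sqrt(n))              # subsample size, a_n / n -> 0
    return np.sqrt(a_n) * max(q_hat - 0.5, 0.0)

# Average the quantity over 200 repetitions for two sample sizes
vals = [np.mean([gap(n) for _ in range(200)]) for n in (10**3, 10**5)]
print(vals)   # decreases toward 0 as n grows
```

The decay rate here is n^{−1/4}, since |q̂_{n,r} − q_r| is of order n^{−1/2} while √a_n is of order n^{1/4}.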
When splitting a node, if the theoretical CART-splitting criterion has multiple maxima, one is randomly selected. This random selection follows a discrete probability law, which is not necessarily uniform and is based on P_{X,Y}, as specified in Definition 1. In order to derive the limit of the probability that a given split occurs in a Θ-random tree of the empirical algorithm, one needs to assess the convergence rate of the empirical CART-splitting criterion when it has multiple maxima.
where V is a Gaussian vector with covariance matrix Cov[Z]. If C_1 is explicitly written C_1 = {(j_k, r_k)}_{k=1,...,c_1}, then Z is defined componentwise, for k ∈ {1, . . . , c_1}, from the indicators of the splits (j_k, r_k), and h_{P_1} is a multivariate quadratic form.
Besides, the variance of each component of h P 1 (V) is strictly positive.
Definition 1 (Theoretical splitting procedure). Let θ^{(V)}_1 be the set of eligible variables to split the root node of a theoretical random tree. The set of best theoretical cuts at the root node is defined as

C*_1(θ^{(V)}_1) = argmax_{(j,r): j ∈ θ^{(V)}_1} L*(R^p, q^{(j)}_r).

If C*_1(θ^{(V)}_1) has multiple elements, then (j_1, r_1) is randomly drawn with probability Φ_{θ^{(V)}_1,(j_1,r_1)}(0), where Φ_{θ^{(V)}_1,(j_1,r_1)} is the cdf of the limit law defined in Lemma 4 for C_1 = C*_1(θ^{(V)}_1). This definition is extended to the second split in Definition 2. Recall that the randomness in a tree can be decomposed as Θ = (Θ^{(S)}, Θ^{(V)}), where Θ^{(S)} corresponds to the subsampling and Θ^{(V)} is related to the variable selection. Θ^{(V)} takes values in the finite set Ω^{(V)} = {1, . . . , p}^{3×mtry}.
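The empirical counterpart of this procedure, selecting a cut that maximizes the CART-splitting criterion and breaking ties at random, can be sketched as follows. For simplicity, this sketch uses a toy variance-based impurity criterion and a uniform draw among ties, whereas Definition 1 specifies a generally non-uniform law; all data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def cart_criterion(y, left_mask):
    """Empirical impurity decrease of a split for binary labels y
    (variance criterion, equivalent to Gini impurity for 0/1 labels)."""
    n = len(y)
    n_l = int(left_mask.sum())
    if n_l == 0 or n_l == n:
        return -np.inf                 # empty child node: split not eligible
    return (y.var()
            - n_l / n * y[left_mask].var()
            - (n - n_l) / n * y[~left_mask].var())

def best_split(x, y, thresholds):
    """Return the threshold maximizing the criterion; ties broken at random."""
    scores = np.array([cart_criterion(y, x < t) for t in thresholds])
    ties = np.flatnonzero(np.isclose(scores, scores.max()))
    return thresholds[rng.choice(ties)]

x = np.array([0.1, 0.2, 0.6, 0.7])
y = np.array([0.0, 0.0, 1.0, 1.0])
print(best_split(x, y, thresholds=np.array([0.15, 0.4, 0.65])))  # -> 0.4
```

On this toy data, the cut at 0.4 separates the two classes perfectly and is the unique maximizer; with ties, repeated calls would return different maximizers at random.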
To finish the proof, we just have to show that lim_{n→∞} V[p̂_{M_n,n}(P_1)] = 0.
The law of total variance gives

V[p̂_{M_n,n}(P_1)] = E[V(p̂_{M_n,n}(P_1) | D_n)] + V[E(p̂_{M_n,n}(P_1) | D_n)] = (1/M_n) E[p̂_n(P_1)(1 − p̂_n(P_1))] + V[p̂_n(P_1)],

since, conditionally on D_n, the M_n tree indicators are i.i.d. Bernoulli variables with parameter p̂_n(P_1). Following the approach of Mentch and Hooker (2016), the first term vanishes as M_n → ∞, and the sampling variance V[p̂_n(P_1)] converges to 0 since p̂_n(P_1) is bounded and converges in probability to a constant. Finally, lim_{n→∞} V[p̂_{M_n,n}(P_1)] = 0, which concludes the proof.
The path P_2 = {(j_1, r_1, s_1), (j_2, r_2, s_2)} can occur in trees where the split at the root node is (j_1, r_1) and the split of one of the child nodes is (j_2, r_2), and in trees where the splits are made in the reverse order, (j_2, r_2) at the root node and (j_1, r_1) at one of the child nodes. Since these two events are disjoint, P(P_2 ∈ T(Θ, D_n)) is the sum of the probabilities of these two events. Without loss of generality, we consider in the entire proof that the split at the root node is (j_1, r_1). Lemmas 6 to 9 below extend Lemmas 2 to 5 to the case where the tree path contains two splits.
We need to introduce additional notation: first, the theoretical hyperrectangle H*(P) based on a path P = {(j_k, r_k, s_k)}_{k=1,...,d}, with d ∈ {1, 2}, defined as the intersection of the half-spaces {x : x^{(j_k)} < q^{(j_k)}_{r_k}} if s_k = L and {x : x^{(j_k)} ≥ q^{(j_k)}_{r_k}} if s_k = R; its empirical counterpart Ĥ_n(P) is defined in (2.3). Furthermore, since, by Assumption (A3), X has a strictly positive density, we have, for j ∈ {1, . . . , p} \ {j_1} and r ∈ {1, . . . , q − 1}, P(X ∈ H*(P_1), X^{(j)} < q^{(j)}_r) > 0 and P(X ∈ H*(P_1), X^{(j)} ≥ q^{(j)}_r) > 0. When j = j_1, the second cut is performed along the same direction as the first one. In that case, depending on the side s_1 of the first cut and the cut positions r_1 and r, one of the two child nodes can be empty with probability one. For example, the hyperrectangle associated with the path {(1, 2, L), (1, 3, R)} is empty. In SIRUS, such splits are not considered when searching for the best cut of a node at the second level of the tree. Thus, we define C_{P_1} as the set of possible splits for the second cut, and C_{P_1}(θ^{(V)}_2) ⊂ C_{P_1} its restriction when the split directions are restricted to θ^{(V)}_2 ⊂ {1, . . . , p}.
Besides, the variance of each component of h_{P_2}(V) is strictly positive.
(c) If L*_{C_1} > 0 and L*_{C_2} = 0, then a_n L^{(C_2)}_{n,P_2} converges in distribution to h_{P_2}(V), where V is a Gaussian vector with covariance matrix Cov[Z], Z is defined componentwise for k ∈ {1, . . . , c_2}, and h_{P_2} is a multivariate quadratic form. Besides, the variance of each component of h_{P_2}(V) is strictly positive.
Definition 2 (Theoretical splitting procedure at the child nodes). Let θ^{(V)} = (θ^{(V)}_1, θ^{(V)}_2, ·) ∈ Ω^{(V)} be the sets of eligible variables to split the nodes of a theoretical random tree. The set of best theoretical cuts at the left child node along the variables in θ^{(V)}_2 is defined analogously to Definition 1. If this set has multiple elements, then (j_2, r_2) is randomly drawn with a probability where P(P_1 ∈ T*(Θ) | Θ^{(V)} = θ^{(V)}) is defined from Definition 1, and Φ_{P_1,θ^{(V)},(j_2,r_2)} is the cdf of the limit law defined in Lemma 8.

Lemma 9. If Assumptions (A1)-(A3) are satisfied, then for all θ^{(V)} ∈ Ω^{(V)}, the conclusion of Lemma 5 extends to the two-split path P_2. Finally, the proof of Theorem 1 in the two-split scenario is the same as in the single-split scenario.
We adapt an inequality from Serfling (2009) (Section 2.3.2, page 75) to bound the following conditional probability for all ε > 0. Since f is continuous and strictly positive, there exist three constants c_1, c_2, η > 0 such that, for all x ∈ [q^{(j)}_r, q^{(j)}_r + η], c_1 ≤ f^{(j)}(x) ≤ c_2. Thus, for all ε < η and n > 1 + 1/(c_1 ε), we can apply Hoeffding's inequality to (C.6), where C = e^{2c_1 η(1+2c_1 η)}. To bound the resulting integral, we break it down into three parts. Since f^{(j)} is bounded by c_2 on [q^{(j)}_r, q^{(j)}_r + η], for n > 1 + 1/(c_1 η) we use inequality (C.7) to bound the first part. In the second integral, we introduce a change of variable of order √n, and therefore we can write √a_n P(q^{(j)}_r ≤ X^{(j)}_1 < q̂^{(j)}_{n,r}) in terms of these integrals. From Assumption (A1), lim_{n→∞} a_n/n = 0, and then lim_{n→∞} √a_n P(q^{(j)}_r ≤ X^{(j)}_1 < q̂^{(j)}_{n,r}) = 0.
The case lim_{n→∞} √a_n P(q̂^{(j)}_{n,r} ≤ X^{(j)} < q^{(j)}_r) = 0 is handled symmetrically.

Proof of Lemma 2. Let j ∈ {1, . . . , p}, r ∈ {1, . . . , q − 1}, and H ⊆ R^p such that P(X ∈ H, X^{(j)} < q^{(j)}_r) > 0 and P(X ∈ H, X^{(j)} ≥ q^{(j)}_r) > 0. For a generic hyperrectangle H, we define N_n(H) = Σ_{i=1}^{a_n} 1_{X_i ∈ H}, with the convention Y̅_{H_L} = 0 if H_L is empty. The theoretical quantities H*_L and Y̅_{H*_L} are defined similarly, replacing the empirical quantile by its population version, and the right-side quantities are defined symmetrically. Simple calculations show that the first term in equation (C.8) can be rewritten as √a_n W^{(j)}_{n,r}, where W^{(j)}_{n,r} is a normalized triple sum over i, k, l ∈ {1, . . . , a_n}. A close inspection of the terms inside the sum of (C.10) reveals that it tends to zero, according to Lemma 1; thus (C.11) holds in probability. Regarding the remaining terms in inequality (C.9), by the law of large numbers, in probability,

lim_{n→∞} N_n(H)/a_n = P(X ∈ H) and lim_{n→∞} N_n(H_L)/a_n = P(X ∈ H*_L).   (C.12)

Additionally, the normalized difference between N_n(H_L) and N_n(H*_L) tends to zero, according to Lemma 1, which gives (C.13) in probability. Since P(X ∈ H) > 0 and P(X ∈ H*_L) > 0 by assumption, we can combine (C.11)-(C.13) to obtain (C.14) in probability. Using (C.11), (C.14), and inequality (C.9), together with a similar result for the other term in equation (C.8), we conclude that, in probability,

lim_{n→∞} √a_n ( L_{a_n}(H, q̂^{(j)}_{n,r}) − L_{a_n}(H, q^{(j)}_r) ) = 0.
Case (a): L*_{C_1} > 0. We first consider the decomposition (C.17) of L_{a_n}(R^p, q̂^{(j)}_{n,r}) into L_{a_n}(R^p, q^{(j)}_r) plus the difference L_{a_n}(R^p, q̂^{(j)}_{n,r}) − L_{a_n}(R^p, q^{(j)}_r). Writing µ^{(j)}_{L,r} (and µ^{(j)}_{R,r} symmetrically), the first two terms of this decomposition are standard variance estimates, and the central limit theorem gives the convergence in distribution of √a_n ( (N_{L,r}/a_n) Y̅_{L,r} − µ^{(j)}_{L,r} ). The same result holds for the second term of (C.17), and using Lemma 2 for the third term of (C.17), we get that, in probability,

lim_{n→∞} √a_n ( L_{a_n}(R^p, q̂^{(j)}_{n,r}) − L_{a_n}(R^p, q^{(j)}_r) ) = 0.
Using equation (C.16), each component of L^{(C_1)}_{n,P_1}, with (j, r) ∈ C_1 \ (j_1, r_1), can be written explicitly. We can then apply the multivariate central limit theorem and Slutsky's theorem to obtain the convergence in distribution of √a_n L^{(C_1)}_{n,P_1} to a Gaussian vector, where, for all (j, r), (j', r') ∈ C_1 \ (j_1, r_1), each element of the covariance matrix Σ is the covariance of the corresponding components Z_{j,r} and Z_{j',r'}. Since L*_{C_1} > 0, we have, for all (j, r) ∈ C_1, µ^{(j)}_{L,r} ≠ µ^{(j)}_{R,r}. Besides, according to Assumption (A3), X has a strictly positive density. Consequently, the variance of Z_{j,r} is strictly positive. This concludes the first case.
Then, simple calculations show that L_{a_n}(R^p, q̂^{(j)}_{n,r}) can be written explicitly. By the law of large numbers, the normalized counts N_{n,L}/a_n converge in probability, and the associated centered term converges in distribution to a normal random variable by the central limit theorem. Furthermore, as for equation (C.10) in the proof of Lemma 2, √a_n |Y̅_L − Y̅*_L| is bounded by a normalized double sum over i, l ∈ {1, . . . , a_n}, and, according to Lemma 1, this right-hand-side term converges to 0. Then, in probability, the first term of a_n R^{(j)}_{L,r} vanishes. For the second term of a_n R^{(j)}_{L,r}, √a_n (Y̅_L − µ) converges to a normal random variable by the central limit theorem, so that, by Slutsky's theorem, this term also converges to 0 in probability. Finally, for the third term of a_n R^{(j)}_{L,r}, we also use equation (C.19) to conclude that it converges to 0 in probability. Consequently, lim_{n→∞} a_n R^{(j)}_{L,r} = 0 in probability.
and Z is defined, for k ∈ {1, . . . , c_1}, with the simplified notations p_{L,k} = p^{(j_k)}_{L,r_k} and p_{R,k} = p^{(j_k)}_{R,r_k}. Finally, since lim_{n→∞} R_{n,P_1} = 0 in probability, from Slutsky's theorem and the continuous mapping theorem, a_n L^{(C_1)}_{n,P_1} converges in distribution to h_{P_1}(V). Note that, since X has a strictly positive density, each component of h_{P_1}(V) has a strictly positive variance.
Proof of Lemma 5. Consider a path P_1 = (j_1, r_1, ·). Set θ^{(V)} = (θ^{(V)}_1, ·, ·) ∈ Ω^{(V)}, a realization of the randomization of the split direction. Recalling that the best split in a random tree is the one maximizing the CART-splitting criterion, we condition on Θ^{(V)} = θ^{(V)} to obtain (C.20). We recall that, given θ^{(V)}, the set of best theoretical cuts along the variables in θ^{(V)}_1 is denoted C*_1(θ^{(V)}_1).

Obviously, if (j_1, r_1) ∉ θ^{(V)}_1 × {1, . . . , q − 1}, the probability of selecting P_1 in the empirical and theoretical trees is null. In the sequel, we assume that (j_1, r_1) ∈ θ^{(V)}_1 × {1, . . . , q − 1} and distinguish between four cases: (j_1, r_1) is not among the best theoretical cuts C*_1(θ^{(V)}_1); it is the unique best theoretical cut; it is one of multiple elements of C*_1(θ^{(V)}_1) that all have a positive value of the theoretical CART-splitting criterion; or, finally, it is one of multiple elements of C*_1(θ^{(V)}_1) that all have a null value of the theoretical CART-splitting criterion.
Using the same reasoning as in Case 2, we define the random vector L^{(C_1)}_{n,P_1}, where each component is the difference between the empirical CART-splitting criterion for a split (j, r) ∈ C_1 \ (j_1, r_1) and for (j_1, r_1), so that

P( for all (j, r) ∈ C_1 \ (j_1, r_1), L_{a_n}(R^p, q̂^{(j_1)}_{n,r_1}) > L_{a_n}(R^p, q̂^{(j)}_{n,r}) ) = P( L^{(C_1)}_{n,P_1} < 0 ).

From Lemma 4 (case (a)), √a_n L^{(C_1)}_{n,P_1} converges in distribution to a Gaussian vector, where, for all (j, r), (j', r') ∈ C_1 \ (j_1, r_1), each element of the covariance matrix Σ is defined accordingly, and the variance of Z_{j,r} is strictly positive. If Φ_{θ^{(V)}_1,(j_1,r_1)} is the cdf of the multivariate normal distribution with covariance matrix Σ, we can conclude that the limit probability is Φ_{θ^{(V)}_1,(j_1,r_1)}(0).
According to Definition 1, in the theoretical random forest, if C*_1(θ^{(V)}_1) has multiple elements, (j_1, r_1) is randomly drawn with probability Φ_{θ^{(V)}_1,(j_1,r_1)}(0). We can notice that, in the specific case where C*_1(θ^{(V)}_1) has two elements, they are both selected with equal probability 1/2. For more than two elements, the weights are not necessarily equal; they depend on the covariance matrix Σ.
Case 4. We assume that all candidate splits have a null value of the theoretical CART-splitting criterion, i.e., for all (j, r) with j ∈ θ^{(V)}_1, L*(R^p, q^{(j)}_r) = 0. According to Lemma 4 (case (b)), a_n L^{(C_1)}_{n,P_1} converges in distribution to h_{P_1}(V), where h_{P_1} is the multivariate quadratic form written explicitly in Lemma 4.
Since (j, r) ∈ C_{P_1}, we have P(X ∈ H*(P_1) | X^{(j)} < q^{(j)}_r) > 0 and P(X ∈ H*(P_1) | X^{(j)} ≥ q^{(j)}_r) > 0. Then, we can directly apply Lemma 2 to the first term of this decomposition, which shows that, in probability,

lim_{n→∞} √a_n ( L_{a_n}(H*(P_1), q̂^{(j)}_{n,r}) − L_{a_n}(H*(P_1), q^{(j)}_r) ) = 0.
L_{a_n}(Ĥ_n(P_1), q̂^{(j)}_{n,r}) = L_{a_n}(H*(P_1), q^{(j)}_r) + ( L_{a_n}(Ĥ_n(P_1), q̂^{(j)}_{n,r}) − L_{a_n}(H*(P_1), q^{(j)}_r) ).   (C.25)

According to Lemma 6, the second term in equation (C.25) converges to 0 in probability. By the law of large numbers, in probability, lim_{n→∞} L_{a_n}(H*(P_1), q^{(j)}_r) = L*(H*(P_1), q^{(j)}_r), which concludes the proof.
Proof of Lemma 8. Similar to the case of P_1 (Lemma 3), where Lemma 6 is used instead of Lemma 2.
Proof of Lemma 9. Consider a path P_2 = {(j_1, r_1, L), (j_2, r_2, ·)}. Set θ^{(V)} = (θ^{(V)}_1, θ^{(V)}_2, ·) ∈ Ω^{(V)}, a realization of the randomization of the split directions at the root node and its left child node. Then, θ^{(V)}_1 and θ^{(V)}_2 denote the sets of eligible variables for, respectively, the first and second splits. We also consider C_{P_1}(θ^{(V)}_2) ⊂ C_{P_1}, the set of eligible second splits.
Case 5. We assume that (j_1, r_1) ∈ C*_1(θ^{(V)}_1) and (j_2, r_2) ∈ C*_2(θ^{(V)}_2), and that the theoretical CART-splitting criterion is null for both splits: L*(R^p, q^{(j_1)}_{r_1}) = 0 and L*(H*(P_1), q^{(j_2)}_{r_2}) = 0. Using the same notations as in Case 4, we have by definition p_{L,k} = P(X^{(j_k)} < q^{(j_k)}_{r_k}, X ∈ H*_l) and p_{R,k} = P(X^{(j_k)} ≥ q^{(j_k)}_{r_k}, X ∈ H*_l), and h_{P_2} is a multivariate quadratic form.
The same reasoning as for Cases 4 and 5 applies, where the limit law of L^{(C_1,C_2)}_{n,P_2} has both Gaussian and χ²-distributed components and is given by case (c) or case (d) of Lemma 8.

D Proof of Theorem 2
We recall Theorem 2 for the sake of clarity.
Let p_0 ∈ [0, max U_n) \ U_n and D'_n = D_n. Before proving Theorem 2, we need the following two lemmas.
Lemma 11. Let P ∈ Π. For all P' ∈ Π such that p̂_n(P') > p_0, we have lim_{M→∞} P(p̂_{M,n}(P') > p_0 | p̂_{M,n}(P) > p_0, D_n) = 1. Symmetrically, for all P' ∈ Π such that p̂_n(P') ≤ p_0, we have lim_{M→∞} P(p̂_{M,n}(P') > p_0 | p̂_{M,n}(P) > p_0, D_n) = 0.
We are now in a position to prove Theorem 2.
Proof of Theorem 2. The first statement, identity (D.1), is proved similarly to Corollary 2, using the law of large numbers instead of Theorem 1. For the second statement, we first recall the definition of Ŝ_{M,n,p0}. We can then derive the limit of the following ratio:

lim_{|λ|→∞} F(a + ελ, b, c + λ, z) / F(1 + ελ, b, 1 + λ, z) = 1.   (D.3)

We use (D.3) in the specific case where b = 1, a, c ∈ Z, ε = 1/(1 − p_0) > 1, z = 1 − p̂_n(P) for P ∈ Π such that p̂_n(P) > p_0 (and then zε < 1), and λ = M(1 − p_0), from which the desired limit follows.

Proof of Lemma 11. Fix D_n. Let P, P' ∈ Π. In what follows, when there is no ambiguity, we will replace T(Θ, D_n) by T_n(Θ) to lighten the notation.
Using standard formulas, Φ can be expressed with the incomplete beta function. Then, the cdf of the binomial distribution can be expressed using the hypergeometric function, and it follows that (D.11) holds for P(P ∈ T_n(Θ_1) | p̂_{M,n}(P) ≤ p_0, D_n). The proof for the case P(p̂_{M,n}(P') > p_0 | p̂_{M,n}(P) > p_0, D_n) is similar.
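The identity used here, namely that the binomial cdf Φ(k, M, p) equals the regularized incomplete beta function I_{1−p}(M − k, k + 1), can be checked numerically (assuming scipy is available; `scipy.special.betainc` is the regularized incomplete beta function):

```python
from scipy.special import betainc
from scipy.stats import binom

M, p = 1000, 0.3
ks = [0, 150, 300, 450, 999]
# Phi(k, M, p) in the proof's notation vs. the regularized incomplete beta
diffs = [abs(binom.cdf(k, M, p) - betainc(M - k, k + 1, 1 - p)) for k in ks]
print(max(diffs))   # numerically zero
```

This representation is what allows the binomial cdf to be rewritten in terms of the hypergeometric function, whose asymptotics in M yield (D.3).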