Open Access
December 2018
Tree ensembles with rule structured horseshoe regularization
Malte Nalenz, Mattias Villani
Ann. Appl. Stat. 12(4): 2379–2408 (December 2018). DOI: 10.1214/18-AOAS1157

Abstract

We propose a new Bayesian model for flexible nonlinear regression and classification using tree ensembles. The model is based on the RuleFit approach in Friedman and Popescu [Ann. Appl. Stat. 2 (2008) 916–954], where rules from decision trees and linear terms are used in an L1-regularized regression. We modify RuleFit by replacing the L1-regularization with a horseshoe prior, which is well known to give aggressive shrinkage of noise predictors while leaving the important signal essentially untouched. This is especially important when a large number of rules are used as predictors, as many of them only contribute noise. Our horseshoe prior has an additional hierarchical layer that applies more shrinkage a priori to rules with a large number of splits and to rules that are satisfied by only a few observations. The aggressive noise shrinkage of our prior also makes it possible to complement the rules from boosting in RuleFit with an additional set of trees from Random Forest, which brings a desirable diversity to the ensemble. We sample from the posterior distribution using a very efficient and easily implemented Gibbs sampler. The new model is shown to outperform state-of-the-art methods such as RuleFit, BART and Random Forest on 16 datasets. The model and its interpretation are demonstrated on the well-known Boston housing data and on gene expression data for cancer classification. The posterior sampling, prediction and graphical tools for interpreting the model results are implemented in a publicly available R package.
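To make the regularization concrete, a minimal sketch of the prior hierarchy is given below. The first two lines are the standard horseshoe prior for the rule coefficients; the scale factor $A_j$, which depends on the rule length $l_j$ (number of splits) and the rule support $s_j$ (fraction of observations satisfying the rule), is one plausible way to encode the additional rule-structured layer described in the abstract, offered as an illustrative assumption rather than the paper's exact parameterization.

% Sketch of a rule-structured horseshoe prior for rule coefficients beta_j.
% The form of A_j is an illustrative assumption, not necessarily the paper's
% exact parameterization.
\begin{align*}
  \beta_j \mid \lambda_j, \tau &\sim \mathcal{N}\!\left(0,\ \lambda_j^2 \tau^2\right), \\
  \lambda_j \mid l_j, s_j &\sim C^{+}\!\left(0,\ A_j\right), \qquad \tau \sim C^{+}(0, 1), \\
  A_j &= \frac{\bigl(2\min(s_j,\ 1 - s_j)\bigr)^{\mu}}{l_j^{\eta}}, \qquad \mu, \eta \ge 0.
\end{align*}

Under a scaling of this kind, rules with many splits (large $l_j$) or rules satisfied by only a few observations (extreme $s_j$) receive a smaller prior scale and hence stronger shrinkage a priori, which is the behavior described in the abstract.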

Citation

Malte Nalenz, Mattias Villani. "Tree ensembles with rule structured horseshoe regularization." Ann. Appl. Stat. 12(4): 2379–2408, December 2018. https://doi.org/10.1214/18-AOAS1157

Information

Received: 1 February 2017; Revised: 1 February 2018; Published: December 2018
First available in Project Euclid: 13 November 2018

zbMATH: 07029459
MathSciNet: MR3875705
Digital Object Identifier: 10.1214/18-AOAS1157

Keywords: Bayesian, classification, decision trees, interpretation, MCMC, nonlinear regression, prediction

Rights: Copyright © 2018 Institute of Mathematical Statistics
