Bayesian nonparametric cross-study validation of prediction methods
Lorenzo Trippa, Levi Waldron, Curtis Huttenhower, Giovanni Parmigiani
Ann. Appl. Stat. 9(1): 402-428 (March 2015). DOI: 10.1214/14-AOAS798

Abstract

We consider comparisons of statistical learning algorithms using multiple data sets, via leave-one-in cross-study validation: each of the algorithms is trained on one data set; the resulting model is then validated on each remaining data set. This poses two statistical challenges that need to be addressed simultaneously. The first is the assessment of study heterogeneity, with the aim of identifying a subset of studies within which algorithm comparisons can be reliably carried out. The second is the comparison of algorithms using the ensemble of data sets. We address both problems by integrating clustering and model comparison. We formulate a Bayesian model for the array of cross-study validation statistics, which defines clusters of studies with similar properties and provides the basis for meaningful algorithm comparison in the presence of study heterogeneity. We illustrate our approach through simulations involving studies with varying severity of systematic errors, and in the context of medical prognosis for patients diagnosed with cancer, using high-throughput measurements of the transcriptional activity of the tumor’s genes.
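The abstract's leave-one-in scheme (train on each study, validate on every other) produces an array of cross-study validation statistics, which is the input to the authors' Bayesian model. A minimal sketch of constructing such an array is below; the simulated data, the learner (ordinary least squares), and the performance measure (test MSE) are illustrative assumptions, not the authors' setup, and the perturbed fourth study only mimics the kind of study heterogeneity the paper addresses.

```python
# Illustrative sketch (assumptions, not the authors' method): build the
# leave-one-in cross-study validation array Z, where Z[i, j] is the
# performance of a model trained on study i and tested on study j.
import numpy as np

rng = np.random.default_rng(0)

def make_study(n, p, beta, noise):
    """Simulate one study: features X and outcome y = X @ beta + noise."""
    X = rng.normal(size=(n, p))
    y = X @ beta + rng.normal(scale=noise, size=n)
    return X, y

p = 5
beta = rng.normal(size=p)
# Four studies; the last uses a perturbed coefficient vector to mimic
# systematic differences between studies (study heterogeneity).
studies = [make_study(100, p, beta, 1.0) for _ in range(3)]
studies.append(make_study(100, p, beta + rng.normal(scale=2.0, size=p), 1.0))

C = len(studies)
Z = np.full((C, C), np.nan)  # diagonal left empty: no within-study entry
for i, (Xi, yi) in enumerate(studies):
    # Train on study i (here: ordinary least squares, an assumed learner).
    coef, *_ = np.linalg.lstsq(Xi, yi, rcond=None)
    for j, (Xj, yj) in enumerate(studies):
        if i != j:
            Z[i, j] = np.mean((Xj @ coef - yj) ** 2)  # test MSE on study j

print(np.round(Z, 2))
```

In the paper this array (one per algorithm) feeds a Bayesian nonparametric model that clusters studies with similar validation behavior before comparing algorithms; the sketch stops at the raw array.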

Citation


Lorenzo Trippa, Levi Waldron, Curtis Huttenhower, Giovanni Parmigiani. "Bayesian nonparametric cross-study validation of prediction methods." Ann. Appl. Stat. 9 (1), 402–428, March 2015. https://doi.org/10.1214/14-AOAS798

Information

Published: March 2015
First available in Project Euclid: 28 April 2015

zbMATH: 06446574
MathSciNet: MR3341121
Digital Object Identifier: 10.1214/14-AOAS798

Rights: Copyright © 2015 Institute of Mathematical Statistics

JOURNAL ARTICLE
27 PAGES
