Open Access
Statistical learning from biased training samples
Stephan Clémençon, Pierre Laforgue
Electron. J. Statist. 16(2): 6086-6134 (2022). DOI: 10.1214/22-EJS2084

Abstract

With the deluge of digitized information in the Big Data era, massive datasets are increasingly available for learning predictive models. In many practical situations, however, poor control of the data acquisition process can jeopardize the outputs of machine learning algorithms, and selection bias issues are now the subject of much attention in the literature. The present article investigates how to extend Empirical Risk Minimization, the principal paradigm in statistical learning, when training observations are generated from biased models, i.e., from distributions that differ from that of the test/prediction stage and are absolutely continuous with respect to the latter. Precisely, we show how to build a “nearly debiased” training statistical population from biased samples and the related biasing functions, following in the footsteps of the approach originally proposed in [46]. We then study, from a nonasymptotic perspective, the performance of minimizers of an empirical version of the risk computed from the statistical population thus created. Remarkably, the learning rate achieved by this procedure is of the same order as that attained in the absence of selection bias. Beyond these theoretical guarantees, we also present experimental results supporting the relevance of the algorithmic approach promoted in this paper.
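The core idea described above, namely correcting for a training distribution that is absolutely continuous with respect to the test distribution by means of known biasing functions, can be illustrated with a minimal one-dimensional sketch. This is an illustrative toy example, not the paper's algorithm: the biasing function `omega` and the estimation target (a mean under the test distribution) are chosen here purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Test distribution P: X ~ N(0, 1), so the target E_P[X] = 0.
# Training points are retained with probability omega(x), so the
# training density is proportional to omega(x) * p(x): selection bias.
def omega(x):
    # Hypothetical logistic biasing function: larger x over-sampled.
    return 1.0 / (1.0 + np.exp(-x))

x_pool = rng.normal(size=100_000)
kept = x_pool[rng.random(x_pool.size) < omega(x_pool)]

# Naive estimate from the biased sample: shifted well above 0.
naive_mean = kept.mean()

# Debiasing: reweight each retained point by 1 / omega(x)
# (self-normalized importance weights), approximating the test mean.
w = 1.0 / omega(kept)
debiased_mean = np.average(kept, weights=w)
```

The same reweighting applied inside an empirical risk, rather than a simple mean, gives a weighted ERM whose minimizers target the test risk instead of the biased training risk.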

Citation


Stephan Clémençon, Pierre Laforgue. "Statistical learning from biased training samples." Electron. J. Statist. 16(2): 6086-6134, 2022. https://doi.org/10.1214/22-EJS2084

Information

Received: 1 October 2021; Published: 2022
First available in Project Euclid: 22 November 2022

MathSciNet: MR4515716
zbMATH: 07633934
Digital Object Identifier: 10.1214/22-EJS2084

Subjects:
Primary: 62C12
Secondary: 62D99

Keywords: biased sampling models, learning under sample selection bias, nonasymptotic generalization bounds, statistical learning theory

Vol.16 • No. 2 • 2022