Abstract
In many applications, we have access to the complete dataset but are only interested in prediction over a particular region of the predictor variables. A standard approach is to find the globally best modeling method from a set of candidates. In practice, however, it is rare for one candidate method to be uniformly better than the others. A natural approach in this scenario is to apply a weighted loss in performance assessment to reflect the region-specific interest. We propose targeted cross-validation (TCV) to select models or procedures based on a general weighted loss. We show that TCV is consistent in selecting the best-performing candidate under the weighted loss. Experimental studies demonstrate the use of TCV and its potential advantage over global CV and over the approach of using only local data to model a local region.
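The idea of selection under a region-weighted loss can be illustrated with a minimal sketch. The example below is not the paper's implementation; the weight function, candidate procedures, and data are hypothetical choices made for illustration. It compares two candidate fitting procedures by k-fold cross-validation, scoring each by a weighted squared prediction error that concentrates on a target region of the predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = sin(x) + noise, with interest focused on x in [0, 1].
n = 200
x = rng.uniform(-3, 3, n)
y = np.sin(x) + rng.normal(scale=0.3, size=n)

def weight(x):
    # Indicator weight on the target region; any nonnegative
    # weight function could be used in this weighted-loss setup.
    return ((x >= 0) & (x <= 1)).astype(float)

def fit_poly(deg):
    # Candidate procedure: global polynomial least-squares fit.
    def fit(xtr, ytr):
        coef = np.polyfit(xtr, ytr, deg)
        return lambda xte: np.polyval(coef, xte)
    return fit

candidates = {"linear": fit_poly(1), "cubic": fit_poly(3)}

def weighted_cv(fit, x, y, k=5):
    """Weighted squared prediction error, averaged over k CV folds."""
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    num, den = 0.0, 0.0
    for te in folds:
        tr = np.setdiff1d(idx, te)
        pred = fit(x[tr], y[tr])(x[te])
        w = weight(x[te])
        num += np.sum(w * (y[te] - pred) ** 2)
        den += np.sum(w)
    return num / den

scores = {name: weighted_cv(f, x, y) for name, f in candidates.items()}
best = min(scores, key=scores.get)
print(best, scores)
```

Because the weight zeroes out errors outside [0, 1], the comparison reflects only performance on the region of interest, which is the point of the weighted loss: a method that is mediocre globally can still be selected if it predicts well where it matters.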
Previous investigations of CV have relied on the condition that, once the sample size is large enough, the ranking of two candidates stays the same. In many applications, however, with changing data-generating processes or highly adaptive modeling methods, the relative performance of the methods is not static as the sample size varies. Even with a fixed data-generating process, the ranking of two methods can switch infinitely many times. In this work, we broaden the concept of selection consistency by allowing the best candidate to switch as the sample size varies, and then establish the consistency of TCV. This flexible framework applies to high-dimensional and complex machine learning scenarios where the relative performance of modeling procedures is dynamic.
Acknowledgement
The authors sincerely thank three anonymous reviewers and the Associate Editor for their constructive and very helpful comments, which led to a substantial improvement of the work. This paper is based upon work supported by the US National Science Foundation under grant number ECCS-2038603.
Citation
Jiawei Zhang, Jie Ding, and Yuhong Yang. "Targeted cross-validation." Bernoulli 29 (1), 377–402, February 2023. https://doi.org/10.3150/22-BEJ1461