Abstract
In this paper, we define a class of cross-validatory model selection criteria as estimators of the predictive risk function based on a discrepancy between a candidate model and the true model. For a vector of unknown parameters, $n$ estimators are required to define the class, where $n$ is the sample size. The $i$th estimator $(i=1,\dots,n)$ is obtained by minimizing a weighted discrepancy function in which the $i$th observation has a weight of $1-\lambda$ and the others have a weight of $1$. Each cross-validatory model selection criterion in the class is specified by the value of $\lambda$. The sample discrepancy function and the ordinary cross-validation (CV) criterion are special cases of the class. One may choose $\lambda$ to minimize the bias. The optimal $\lambda$ makes the resulting bias-corrected CV (CCV) criterion a second-order unbiased estimator of the risk function, whereas the ordinary CV criterion is only a first-order unbiased estimator of the risk function.
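As a concrete illustration of the construction described above (a sketch for intuition only, not code from the paper), consider linear regression with a squared-error discrepancy. The $i$th estimator then solves a weighted least-squares problem in which observation $i$ carries weight $1-\lambda$; setting $\lambda=0$ recovers the sample discrepancy (every estimator is the full-sample fit), and $\lambda=1$ recovers the ordinary leave-one-out CV criterion. The function name and the choice of discrepancy are illustrative assumptions.

```python
import numpy as np

def weighted_cv_criterion(X, y, lam):
    """Illustrative sketch of the class of CV criteria indexed by lam.

    For each i, fit by weighted least squares with weight (1 - lam) on
    observation i and weight 1 on the rest, then evaluate the squared-error
    discrepancy of observation i at that fit.  lam = 0 gives the sample
    discrepancy; lam = 1 gives the ordinary leave-one-out CV criterion.
    (Linear model and squared-error discrepancy are assumed for illustration.)
    """
    n = len(y)
    crit = 0.0
    for i in range(n):
        w = np.ones(n)
        w[i] = 1.0 - lam  # downweight the i-th observation
        # Weighted least-squares normal equations: (X' W X) beta = X' W y
        XtW = X.T * w
        beta_i = np.linalg.solve(XtW @ X, XtW @ y)
        crit += (y[i] - X[i] @ beta_i) ** 2
    return crit / n
```

With $\lambda=1$ the $i$th weighted fit coincides with the fit that deletes observation $i$, so the value agrees with ordinary CV computed by direct deletion.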
Citation
Hirokazu Yanagihara. Ke-Hai Yuan. Hironori Fujisawa. Kentaro Hayashi. "A class of cross-validatory model selection criteria." Hiroshima Math. J. 43 (2) 149 - 177, July 2013. https://doi.org/10.32917/hmj/1372180510