A class of cross-validatory model selection criteria
Hirokazu Yanagihara, Ke-Hai Yuan, Hironori Fujisawa, Kentaro Hayashi
Hiroshima Math. J. 43(2): 149-177 (July 2013). DOI: 10.32917/hmj/1372180510


In this paper, we define a class of cross-validatory model selection criteria as estimators of the predictive risk function based on a discrepancy between a candidate model and the true model. For a vector of unknown parameters, $n$ estimators are required to define the class, where $n$ is the sample size. The $i$th estimator $(i=1,\dots,n)$ is obtained by minimizing a weighted discrepancy function in which the $i$th observation has weight $1-\lambda$ and all others have weight $1$. Each cross-validatory model selection criterion in the class is specified by the value of $\lambda$; the sample discrepancy function and the ordinary cross-validation (CV) criterion are special cases. One may choose $\lambda$ to minimize the bias. The optimal $\lambda$ makes the bias-corrected CV (CCV) criterion a second-order unbiased estimator of the risk function, whereas the ordinary CV criterion is only a first-order unbiased estimator of the risk function.
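The construction above can be illustrated with a minimal sketch. The paper works with a general discrepancy function; here we assume squared-error discrepancy in a linear regression model, and the function name `weighted_cv_criterion` is our own. For each $i$, the sketch fits a weighted least-squares estimator with weight $1-\lambda$ on observation $i$ and weight $1$ on the rest, then averages the discrepancy evaluated at observation $i$:

```python
import numpy as np

def weighted_cv_criterion(X, y, lam):
    """Sketch of the weighted CV class (squared-error discrepancy assumed).

    For each i, the i-th estimator minimizes
        sum_j w_j * (y_j - x_j' beta)^2,
    where w_i = 1 - lam and w_j = 1 for j != i; the criterion averages
    the discrepancy of the i-th observation under the i-th estimator.
    """
    n = X.shape[0]
    total = 0.0
    for i in range(n):
        w = np.ones(n)
        w[i] = 1.0 - lam                      # down-weight the i-th observation
        Xw = X * w[:, None]                   # weighted design for the normal equations
        beta_i = np.linalg.solve(Xw.T @ X, Xw.T @ y)
        total += (y[i] - X[i] @ beta_i) ** 2  # discrepancy at observation i
    return total / n
```

As the abstract notes for the special cases: with `lam = 0` every weighted fit coincides with the full-data fit, so the criterion reduces to the sample discrepancy (in-sample mean squared error), while `lam = 1` gives zero weight to the held-out observation and reproduces the ordinary leave-one-out CV criterion.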


Citation

Hirokazu Yanagihara, Ke-Hai Yuan, Hironori Fujisawa, Kentaro Hayashi. "A class of cross-validatory model selection criteria." Hiroshima Math. J. 43 (2): 149-177, July 2013.


Published: July 2013
First available in Project Euclid: 25 June 2013

zbMATH: 1294.62134
MathSciNet: MR3072950
Digital Object Identifier: 10.32917/hmj/1372180510

Primary: 62H25
Secondary: 62F07

Rights: Copyright © 2013 Hiroshima University, Mathematics Program


Vol. 43 • No. 2 • July 2013