Abstract
The asymptotic risk of efficient estimators with Kullback–Leibler loss in smoothly parametrized statistical models is $k/(2n)$, where $k$ is the parameter dimension and $n$ is the sample size. Under fairly general conditions, we give a simple information-theoretic proof that the set of parameter values where an arbitrary estimator is superefficient is negligible. The proof is based on a result of Rissanen that codes have asymptotic redundancy not smaller than $(k/2)\log n$, except on a set of parameter values of measure zero.
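A heuristic sketch of the connection between the two rates (this sketch assumes the standard chain-rule identity equating the redundancy of a predictive code with the cumulative Kullback–Leibler risk of the underlying estimator; it is not the paper's full argument): a per-sample risk of $k/(2i)$ at sample size $i$ accumulates over $n$ observations to
$$\sum_{i=1}^{n} \frac{k}{2i} \;=\; \frac{k}{2}\sum_{i=1}^{n}\frac{1}{i} \;\sim\; \frac{k}{2}\log n,$$
which matches Rissanen's redundancy lower bound; an estimator superefficient on a set of positive measure would yield a code beating that bound on the same set, contradicting Rissanen's result.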
Citation
Andrew Barron and Nicolas Hengartner. "Information theory and superefficiency." Ann. Statist. 26 (5): 1800–1825, October 1998. https://doi.org/10.1214/aos/1024691358