Abstract
The paper suggests a simple method of deriving minimax lower bounds to the accuracy of statistical inference on heavy tails. A well-known result by Hall and Welsh (Ann. Statist. 12 (1984) 1079–1084) states that if $\hat{\alpha}_{n}$ is an estimator of the tail index $\alpha_{P}$ and $\{z_{n}\}$ is a sequence of positive numbers such that $\sup_{P\in\mathcal{D}_{r}}\mathbb{P}(|\hat{\alpha}_{n}-\alpha_{P}|\ge z_{n})\to0$, where $\mathcal{D}_{r}$ is a certain class of heavy-tailed distributions, then $z_{n}\gg n^{-r}$. The paper presents a non-asymptotic lower bound to the probabilities $\mathbb{P}(|\hat{\alpha}_{n}-\alpha_{P}|\ge z_{n})$. It also establishes non-uniform lower bounds to the accuracy of tail-constant and extreme-quantile estimation. The results reveal that the normalising sequences of robust estimators should depend in a specific way on the tail index and the tail constant.
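To make the object of the lower bounds concrete, the sketch below simulates tail-index estimation with the classical Hill estimator, a standard estimator $\hat{\alpha}_{n}$ of the kind the Hall–Welsh result constrains. This is an illustrative example only, not the method of the paper; the sample size, the choice of $k$ upper order statistics, and the exact-Pareto model are all assumptions made for the demonstration.

```python
import numpy as np

def hill_tail_index(x, k):
    """Hill estimator of the tail index alpha, based on the k largest order statistics.

    The Hill statistic estimates gamma = 1/alpha; we return its reciprocal.
    """
    xs = np.sort(np.asarray(x))
    threshold = xs[-k - 1]                      # (k+1)-th largest observation
    gamma_hat = np.mean(np.log(xs[-k:] / threshold))
    return 1.0 / gamma_hat

# Exact Pareto sample with P(X > x) = x^{-alpha}, x >= 1 (illustrative parameters).
rng = np.random.default_rng(0)
alpha_true = 2.0
sample = rng.pareto(alpha_true, size=10_000) + 1.0  # NumPy's pareto is Lomax; shift by 1

alpha_hat = hill_tail_index(sample, k=500)
# For an exact Pareto tail, alpha_hat concentrates around alpha_true as n, k grow;
# uniform guarantees over a class like D_r are exactly what the lower bounds limit.
```

Note that the good behaviour here relies on the exact-Pareto assumption; over a whole class $\mathcal{D}_{r}$ of heavy-tailed distributions, no estimator can beat the $n^{-r}$ rate uniformly, which is the point of the Hall–Welsh result quoted above.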
Citation
S.Y. Novak. "Lower bounds to the accuracy of inference on heavy tails." Bernoulli 20(2), 979–989, May 2014. https://doi.org/10.3150/13-BEJ512