Abstract
Let $\bar{X}$ be the mean of a random sample from a distribution which is symmetric about its unknown mean $\mu$ and has known variance $\sigma^2$. The classical method of constructing a hypothesis test or confidence interval for $\mu$ is to use the normal approximation to $n^{\frac{1}{2}}(\bar{X} - \mu)/\sigma$. In order to make this procedure more robust, we might lightly trim the mean by removing extremes from the sample. It is shown that this procedure can greatly improve the rate of convergence in the central limit theorem, but only if the new mean is rescaled in a rather complicated way. From a practical point of view, the removal of extreme values does not make the test or confidence interval more robust.
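The trimming procedure the abstract describes can be illustrated with a minimal sketch. Below, the lightly trimmed mean removes the $k$ smallest and $k$ largest order statistics before averaging; the naive rescaling by the original $\sigma$ shown here is an assumption for illustration only and is *not* the more complicated rescaling the paper shows is required for the improved convergence rate. The Laplace distribution is used purely as an example of a symmetric distribution with known variance.

```python
import numpy as np

def classical_stat(x, mu, sigma):
    # Classical normalized statistic n^{1/2} (X-bar - mu) / sigma
    n = len(x)
    return np.sqrt(n) * (x.mean() - mu) / sigma

def lightly_trimmed_mean(x, k):
    # Remove the k smallest and k largest order statistics, then average
    s = np.sort(x)
    return s[k:len(s) - k].mean()

# Demo: a symmetric distribution with known variance (Laplace, mean 0)
rng = np.random.default_rng(0)
mu, n, k = 0.0, 200, 2
x = rng.laplace(loc=mu, scale=1.0, size=n)
sigma = np.sqrt(2.0)  # variance of Laplace(scale=1) is 2

t_classical = classical_stat(x, mu, sigma)
# Naive rescaling of the trimmed mean (NOT the paper's rescaling)
t_trimmed = np.sqrt(n) * (lightly_trimmed_mean(x, k) - mu) / sigma
```

The point of the paper is that `t_trimmed` as scaled here does not realize the faster convergence; the correct normalization depends on the trimming in a nontrivial way.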
Citation
Peter Hall. "On the Influence of Extremes on the Rate of Convergence in the Central Limit Theorem." Ann. Probab. 12(1): 154–172, February 1984. https://doi.org/10.1214/aop/1176993380