Consider a testing problem for the null hypothesis $H_0: \theta\in\Theta_0$. The standard frequentist practice is to reject the null hypothesis when the p-value falls below a threshold $\alpha$, usually $0.05$. We ask how many of the null hypotheses a frequentist rejects are actually true. Precisely, we study the Bayesian false discovery rate $\delta_n=P_g(\theta\in\Theta_0\mid \text{p-value}<\alpha)$ under a proper prior density $g(\theta)$. This quantity depends on the prior $g$, the sample size $n$, the threshold $\alpha$, and the choice of the test statistic. We show that the Benjamini--Hochberg FDR in fact converges to $\delta_n$ almost surely under $g$ for any fixed $n$. For one-sided null hypotheses, we derive a third-order asymptotic expansion for $\delta_n$ in the continuous exponential family when the test statistic is the MLE and in the location family when the test statistic is the sample median. We also briefly mention the expansion in the uniform family when the test statistic is the MLE. The expansions are obtained by combining Edgeworth expansions for the CDF, Cornish--Fisher expansions for the quantile function, and various Taylor expansions. Numerical results show that the expansions are very accurate even for small $n$ (e.g., $n=10$). We draw several useful conclusions from these expansions; in particular, the frequentist is not prone to false discoveries unless the prior $g$ is too spiky. The results are illustrated by many examples.
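The quantity $\delta_n$ can be approximated directly by simulation. The following is a minimal sketch, not taken from the paper, under illustrative assumptions: a one-sided test $H_0: \theta\le 0$ with data $X_1,\dots,X_n \sim N(\theta,1)$, the MLE $\bar X$ as test statistic, and a normal prior $g = N(0,\tau^2)$; the function name `estimate_delta` and all parameter values are hypothetical choices for this example.

```python
# Monte Carlo sketch of the Bayesian false discovery rate
# delta_n = P_g(theta in Theta_0 | p-value < alpha).
# Illustrative setup (not from the paper): H_0: theta <= 0,
# X_1..X_n ~ N(theta, 1), test statistic = MLE (sample mean),
# proper prior g = N(0, tau^2).
import math
import random

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def estimate_delta(n=10, alpha=0.05, tau=1.0, reps=200_000, seed=0):
    rng = random.Random(seed)
    rejected = 0           # draws with p-value < alpha
    null_and_rejected = 0  # of those, draws with theta in Theta_0
    for _ in range(reps):
        theta = rng.gauss(0.0, tau)                   # theta ~ g
        xbar = rng.gauss(theta, 1.0 / math.sqrt(n))   # MLE of theta
        p_value = 1.0 - normal_cdf(math.sqrt(n) * xbar)
        if p_value < alpha:
            rejected += 1
            if theta <= 0.0:
                null_and_rejected += 1
    return null_and_rejected / rejected if rejected else float("nan")

print(estimate_delta())
```

With this flat normal prior the estimated $\delta_n$ is small, consistent with the abstract's conclusion that false discoveries become a concern mainly when $g$ is sharply concentrated (spiky) near the null.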
Digital Object Identifier: 10.1214/074921706000000699