Abstract
We introduce a new class of methods for finite-sample false discovery rate (FDR) control in multiple testing problems with dependent test statistics where the dependence is known. Our approach separately calibrates a data-dependent p-value rejection threshold for each hypothesis, relaxing or tightening the threshold as appropriate to target exact FDR control. In addition to our general framework, we propose a concrete algorithm, the dependence-adjusted Benjamini–Hochberg (dBH) procedure, which thresholds the BH-adjusted p-value for each hypothesis. Under positive regression dependence, the dBH procedure uniformly dominates the standard BH procedure, and in general it uniformly dominates the Benjamini–Yekutieli (BY) procedure (also known as BH with log correction), which makes a conservative adjustment for worst-case dependence. Simulations and real data examples show substantial power gains over the BY procedure, and competitive performance with knockoffs in settings where both methods are applicable. When the BH procedure empirically controls FDR (as it typically does in practice), the dBH procedure performs comparably.
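As a rough illustration of the baseline procedures the abstract compares against (not the paper's dBH algorithm), the sketch below implements the standard BH step-up rule and its BY variant, in which the "log correction" divides the target FDR level q by the harmonic sum ∑ᵢ 1/i to guard against worst-case dependence. The function name and interface are illustrative, not from the paper.

```python
import numpy as np

def bh_rejections(pvals, q=0.05, log_correction=False):
    """Benjamini-Hochberg step-up procedure at target FDR level q.

    With log_correction=True, this becomes the Benjamini-Yekutieli
    procedure: the level is divided by sum_{i=1}^m 1/i, a conservative
    adjustment valid under arbitrary dependence.
    """
    pvals = np.asarray(pvals, dtype=float)
    m = len(pvals)
    level = q / np.sum(1.0 / np.arange(1, m + 1)) if log_correction else q
    order = np.argsort(pvals)
    sorted_p = pvals[order]
    # Step-up rule: reject the k smallest p-values, where k is the
    # largest index with p_(k) <= k * level / m.
    below = sorted_p <= level * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

# Hypothetical p-values: BH rejects three hypotheses, while the more
# conservative BY correction rejects only one at the same level.
p = [0.002, 0.018, 0.028, 0.1, 0.5]
print(bh_rejections(p, q=0.05).sum())                        # BH count
print(bh_rejections(p, q=0.05, log_correction=True).sum())   # BY count
```

The abstract's claim that dBH "uniformly dominates" BY means that, under the paper's assumptions, dBH rejects every hypothesis BY does while still controlling FDR exactly.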
Funding Statement
William Fithian is supported in part by NSF Grant DMS-1916220 and a Hellman Fellowship from Berkeley.
Acknowledgments
We are grateful to Emmanuel Candès, Patrick Chao, Jonathan Taylor and anonymous reviewers for helpful feedback on a draft of this paper.
Citation
William Fithian, Lihua Lei. "Conditional calibration for false discovery rate control under dependence." Ann. Statist. 50 (6) 3091–3118, December 2022. https://doi.org/10.1214/21-AOS2137