Statistical inference for decentralized federated learning
Jia Gu, Song Xi Chen
Ann. Statist. 52(6): 2931-2955 (December 2024). DOI: 10.1214/24-AOS2452

Abstract

This paper considers decentralized Federated Learning (FL) for M-estimation under heterogeneous distributions among distributed clients or data blocks. The mean squared error and the consensus error among the estimators produced at different clients by the decentralized stochastic gradient descent (DSGD) algorithm are derived. The asymptotic normality of the Polyak–Ruppert (PR) averaged estimator in the decentralized distributed setting is established, which shows that its statistical efficiency comes at a cost: the permitted number of clients is more restrictive than in distributed M-estimation. To overcome this restriction, a one-step estimator is proposed that permits a much larger number of clients while still achieving the same efficiency as the original PR-averaged estimator in the nondistributed setting. Confidence regions based on both the PR-averaged estimator and the proposed one-step estimator are constructed to facilitate statistical inference for decentralized FL.
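To make the ingredients concrete, the following is a minimal sketch, not the paper's algorithm, of decentralized SGD with Polyak–Ruppert averaging and a one-step Newton correction on a toy heterogeneous least-squares problem. The ring topology, mixing weights, step-size schedule, sample sizes, and all variable names are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of DSGD + Polyak-Ruppert (PR) averaging for M-estimation.
# Assumptions (not from the paper): K clients with heterogeneous covariate
# scales, ring-topology mixing, step size eta_0 * t^{-alpha}.

rng = np.random.default_rng(0)
K, d, T = 8, 3, 5000                  # clients, dimension, iterations
theta_star = np.array([1.0, -2.0, 0.5])
scales = 1.0 + rng.uniform(0, 1, size=K)  # client-level heterogeneity

def local_grad(k, theta):
    """One stochastic gradient of the least-squares loss at client k."""
    x = scales[k] * rng.normal(size=d)
    y = x @ theta_star + rng.normal()
    return (x @ theta - y) * x

# Doubly stochastic mixing matrix for a ring: average with both neighbours.
W = np.zeros((K, K))
for k in range(K):
    W[k, k] = 0.5
    W[k, (k - 1) % K] = 0.25
    W[k, (k + 1) % K] = 0.25

theta = np.zeros((K, d))              # one iterate per client
pr_sum = np.zeros((K, d))             # running sums for PR averaging

for t in range(1, T + 1):
    eta = 0.5 * t ** -0.7             # polynomially decaying step size
    grads = np.stack([local_grad(k, theta[k]) for k in range(K)])
    theta = W @ theta - eta * grads   # gossip (mixing) step + local SGD step
    pr_sum += theta

theta_pr = pr_sum.mean(axis=0) / T    # PR average, pooled over clients
print("PR-averaged estimate:", np.round(theta_pr, 3))
print("consensus error:", np.round(np.linalg.norm(theta - theta.mean(0)), 4))

# One-step correction (also a hedged sketch): a single Newton step from the
# PR average, using pooled Hessian and gradient estimates from fresh samples.
n_eval = 2000
H, g = np.zeros((d, d)), np.zeros(d)
for k in range(K):
    for _ in range(n_eval // K):
        x = scales[k] * rng.normal(size=d)
        y = x @ theta_star + rng.normal()
        H += np.outer(x, x)
        g += (x @ theta_pr - y) * x
theta_os = theta_pr - np.linalg.solve(H, g)
print("one-step estimate:", np.round(theta_os, 3))
```

The doubly stochastic mixing matrix W is the standard design choice here: it preserves the network-wide average at every gossip step while contracting the disagreement among clients, which is what drives the consensus error to zero.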

Funding Statement

This research is supported by National Natural Science Foundation of China grants 12292980, 12292983 and 92358303.

Citation


Jia Gu, Song Xi Chen. "Statistical inference for decentralized federated learning." Ann. Statist. 52(6): 2931-2955, December 2024. https://doi.org/10.1214/24-AOS2452

Information

Received: 1 May 2024; Revised: 1 September 2024; Published: December 2024
First available in Project Euclid: 18 December 2024

Digital Object Identifier: 10.1214/24-AOS2452

Subjects:
Primary: 62-08
Secondary: 62L12

Keywords: decentralized estimation, decentralized stochastic gradient descent, federated learning, heterogeneity, one-step estimation

Rights: Copyright © 2024 Institute of Mathematical Statistics
