FedGA: Federated Learning with Gradient Alignment for Error Asymmetry Mitigation

Research output: Contribution to conference (unpublished) › Paper › peer-review

Abstract

Federated learning (FL) gives rise to both intra-client and inter-client class imbalance, with the latter, more than the former, leading to biased client updates and thus deteriorating the distributed models. Such bias is exacerbated during the server aggregation phase and has yet to be effectively addressed by conventional re-balancing methods. To this end, departing from off-the-shelf label- or loss-based approaches, we propose a gradient alignment (GA)-informed FL method, dubbed FedGA, in which the importance of error asymmetry (EA) in this bias is observed and its linkage to the gradient of the loss with respect to the raw logits is explored. Concretely, GA, implemented via label calibration during backpropagation, prevents catastrophic forgetting of rare and missing classes, thereby improving model convergence and accuracy. Experimental results on five benchmark datasets demonstrate that FedGA outperforms the pioneering counterpart FedAvg and four of its variants in minimizing EA and update bias, accordingly yielding higher F1 score and accuracy margins as the Dirichlet distribution sampling factor α increases. The code and more details are available at https://anonymous.4open.science/r/FedGA-B052/README.md.
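The link the abstract draws between error asymmetry (EA) and the gradient of the loss with respect to the raw logits can be made concrete with softmax cross-entropy, whose logit gradient is p - y. On a client where a class is rare or missing, that class's column of the one-hot targets y is (almost) entirely zero, so its logit gradient is one-sided: the client update only ever pushes that class's score down. The sketch below illustrates this effect together with a label-calibration step in the spirit of GA; note that `calibrate_labels`, its inverse-frequency weighting, and the `alpha` mixing factor are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def logit_gradients(logits, labels, num_classes):
    """Gradient of softmax cross-entropy w.r.t. raw logits: dL/dz = p - y.

    For a class absent from the client's data, the one-hot column is all
    zero, so the gradient p_c - 0 is non-negative everywhere: the update
    can only suppress that class. Accumulated over rounds, this one-sided
    pressure is the error asymmetry the abstract refers to.
    """
    p = softmax(logits)
    y = np.eye(num_classes)[labels]
    return p - y

def calibrate_labels(labels, class_counts, num_classes, alpha=0.1):
    # Hypothetical label calibration (illustrative, not FedGA's rule):
    # blend the one-hot targets with inverse-frequency class weights so
    # rare/missing classes receive a small negative gradient component.
    y = np.eye(num_classes)[labels]
    inv_freq = 1.0 / np.maximum(class_counts, 1)
    weights = inv_freq / inv_freq.sum()
    return (1 - alpha) * y + alpha * weights  # weights broadcast over batch

# Toy client that has only seen classes {0, 1} out of 3.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
labels = np.array([0, 1, 0, 1])

grads = logit_gradients(logits, labels, num_classes=3)
print(grads[:, 2])  # class 2 is missing: every entry is >= 0

y_cal = calibrate_labels(labels, np.array([2, 2, 0]), num_classes=3)
print(softmax(logits) - y_cal)  # class 2's column is shifted down by 0.05
```

The Dirichlet sampling factor α mentioned at the end of the abstract refers to the standard non-IID benchmark partition: each class's samples are divided across clients in proportions drawn from Dir(α). A minimal sketch of that partition follows, with `dirichlet_partition` as an assumed helper name:

```python
def dirichlet_partition(labels, num_clients, alpha, rng=None):
    # Standard Dirichlet non-IID split: for each class, draw client
    # shares from Dir(alpha) and slice that class's indices accordingly.
    rng = rng or np.random.default_rng()
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        shares = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for k, part in enumerate(np.split(idx, cuts)):
            client_indices[k].extend(part.tolist())
    return client_indices
```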
Original language: English
Publication status: Published - 4 Mar 2025
Event: 39th AAAI Conference on Artificial Intelligence - Philadelphia Marriott Downtown, Philadelphia, United States
Duration: 25 Feb 2025 - 4 Mar 2025
https://aaai.org/conference/aaai/aaai-25/

Conference

Conference: 39th AAAI Conference on Artificial Intelligence
Abbreviated title: AAAI-2025
Country/Territory: United States
City: Philadelphia
Period: 25/02/25 - 4/03/25
Internet address: https://aaai.org/conference/aaai/aaai-25/
