Author affiliations: Univ Sydney, Fac Engn, Sch Comp Sci, Sydney, NSW 2006, Australia; Hong Kong Univ Sci & Technol, Generat AI Res & Dev Ctr, Hong Kong, Peoples R China
Publication: IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING (IEEE Trans. Dependable Secure Comput.)
Year/Volume/Issue: 2025, Vol. 22, No. 3
Pages: 2519-2532
Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]
Keywords: Federated learning; Machine learning algorithms; Servers; Training data; Robustness; Training; Data models; Measurement; Collaboration; Threat modeling; Adversarial attacks and defenses; algorithmic fairness in machine learning; resilient federated learning
Abstract: Algorithmic fairness, emphasizing that machine learning algorithms should not discriminate against specific demographic groups, has recently gained increasing attention in the context of federated learning. Existing algorithmic fairness algorithms in federated learning primarily focus on mitigating inherent biases within the training data or biases introduced by non-IID data distributions. However, these methods heavily rely on the assumption of trust and cooperation among participants in federated learning, leaving a notable gap in exploring the robustness of algorithmic fairness under adversarial environments. Therefore, our study advocates the exploration of the robustness of algorithmic fairness in federated learning. We first introduce four types of fairness attacks targeting the fairness of federated learning models. Then, we propose FairGuard, an innovative defense framework designed to counter these attacks. FairGuard employs a novel dual-inference algorithm that effectively identifies malicious clients during the global aggregation phase of federated learning. Our comprehensive experiments demonstrate that the introduced fairness attacks significantly compromise the global model's fairness and cannot be effectively mitigated by existing defense mechanisms, while FairGuard exhibits remarkable effectiveness in defending against these fairness attacks.
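The abstract describes screening out malicious clients during the server's global aggregation step so that fairness-degrading updates do not enter the averaged model. The sketch below is illustrative only and does not reproduce FairGuard's dual-inference algorithm: it assumes, purely for illustration, that the server holds a small audit set and scores each client update by the demographic-parity gap it would induce, discarding updates above a hypothetical threshold `tau` before federated averaging.

```python
import numpy as np

def demographic_parity_gap(weights, X_audit, group, predict_fn):
    """Absolute difference in positive-prediction rates between two groups
    (group labels 0 and 1) for the model induced by `weights`."""
    preds = predict_fn(weights, X_audit)
    return abs(preds[group == 0].mean() - preds[group == 1].mean())

def fairness_aware_aggregate(client_updates, X_audit, group, predict_fn, tau=0.2):
    """Keep only client updates whose induced model stays within the fairness
    budget `tau` on the server's audit data, then average them (FedAvg-style).
    `tau` and the audit-set scoring are illustrative assumptions, not the
    paper's actual dual-inference procedure."""
    kept = [w for w in client_updates
            if demographic_parity_gap(w, X_audit, group, predict_fn) <= tau]
    if not kept:
        # If every update is flagged, fall back to averaging all of them
        # rather than returning nothing.
        kept = client_updates
    return np.mean(kept, axis=0)
```

A fairness attacker in this setting submits an update that shifts predictions for one demographic group; the server-side filter drops such updates before they can bias the global average.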