Algorithmic fairness, which requires that machine learning algorithms not discriminate against specific demographic groups, has recently attracted growing attention in the context of federated learning. Existing algorithmic fairness methods in federated learning primarily focus on mitigating inherent biases within the training data or biases introduced by non-IID data distributions. However, these methods rely heavily on the assumption of trust and cooperation among participants in federated learning, leaving a notable gap in the study of how robust algorithmic fairness is in adversarial environments. Our study therefore investigates the robustness of algorithmic fairness in federated learning. We first introduce four types of fairness attacks that target the fairness of federated learning models. We then propose FairGuard, a defense framework designed to counter these attacks. FairGuard employs a novel dual-inference algorithm that identifies malicious clients during the global aggregation phase of federated learning. Our comprehensive experiments demonstrate that the introduced fairness attacks significantly compromise the global model's fairness and cannot be effectively mitigated by existing defense mechanisms, whereas FairGuard is highly effective in defending against them.
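The abstract does not specify how FairGuard's dual-inference algorithm works, so the following is only an illustrative sketch of the general idea of screening client updates for fairness harm during aggregation, not the paper's actual method. All names here (`fairness_screened_fedavg`, `demographic_parity_gap`, `gap_threshold`, the linear scorer, and the server-held validation set) are assumptions introduced for illustration.

```python
import numpy as np

def demographic_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = preds[groups == 0].mean()
    rate_b = preds[groups == 1].mean()
    return abs(rate_a - rate_b)

def fairness_screened_fedavg(global_w, client_deltas, X_val, s_val, gap_threshold=0.15):
    """Hypothetical fairness-screened FedAvg: keep only those client deltas
    whose candidate model stays under a demographic-parity-gap threshold on a
    server-held validation set, then average the surviving deltas.
    NOTE: this is a generic sketch, not FairGuard's dual-inference algorithm."""
    benign = []
    for delta in client_deltas:
        candidate = global_w + delta                     # apply the client's update
        preds = (X_val @ candidate > 0).astype(float)    # simple linear classifier
        if demographic_parity_gap(preds, s_val) <= gap_threshold:
            benign.append(delta)
    if not benign:                                       # no update passes: keep global model
        return global_w
    return global_w + np.mean(benign, axis=0)            # FedAvg over screened deltas

# Toy usage: two honest clients and one client submitting a bias-inducing update.
rng = np.random.default_rng(0)
d = 5
X_val = rng.normal(size=(200, d))
s_val = rng.integers(0, 2, size=200)                     # sensitive attribute (0/1)
global_w = np.zeros(d)
honest = [rng.normal(scale=0.01, size=d) for _ in range(2)]
malicious = [10.0 * X_val[s_val == 1].mean(axis=0)]      # skews positives toward group 1
new_w = fairness_screened_fedavg(global_w, honest + malicious, X_val, s_val)
```

In this toy setup the malicious delta inflates the parity gap on the validation set and is filtered out before averaging, which mirrors, at a very coarse level, the goal the abstract attributes to FairGuard: detecting fairness-attacking clients at aggregation time rather than trusting all participants.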