
Large Language Model Federated Learning with Blockchain and Unlearning for Cross-Organizational Collaboration

Authors: Zuo, Xuhan; Wang, Minghao; Zhu, Tianqing; Yu, Shui; Zhou, Wanlei

Affiliations: Faculty of Data Science, City University of Macau, China; School of Computer Science, University of Technology Sydney, Ultimo 2007, Australia

Publication: arXiv

Year: 2024


Subject: Federated learning

Abstract: Large language models (LLMs) have transformed the way computers understand and process human language, but using them effectively across different organizations remains difficult. When organizations work together to improve LLMs, they face several main challenges. First, organizations hesitate to share their valuable data with others. Second, competition between organizations creates trust problems during collaboration. Third, new privacy laws require organizations to be able to delete specific data when requested, which is especially difficult when multiple organizations are learning from shared data. Traditional federated learning approaches do not address these interconnected challenges, particularly in scenarios where participants cannot fully trust each other or the central aggregator. To overcome these limitations, we propose a hybrid blockchain-based federated learning framework that uniquely combines public and private blockchain architectures with multi-agent reinforcement learning. Our framework enables transparent sharing of model updates through the public blockchain while protecting sensitive computations in private chains. Each organization operates as an intelligent agent, using Q-learning to optimize its participation strategy and resource allocation, thus aligning individual incentives with collective goals. Notably, we introduce an efficient unlearning mechanism based on Low-Rank Adaptation (LoRA) that enables selective removal of specific data contributions without compromising the model's overall performance. Through extensive experimentation on real-world datasets, we demonstrate that our framework effectively balances privacy protection, trust establishment, and regulatory compliance while maintaining high model performance. Case studies in the healthcare and education sectors validate our approach's practical applicability in sensitive domains where data privacy and trust are paramount. Copyright © 2024, The Authors. All rights reserved.
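The multi-agent component described in the abstract (each organization as a Q-learning agent optimizing its participation strategy) can be illustrated with a minimal, self-contained sketch. The class name, the three-action participation set, and the reward values are illustrative assumptions for exposition, not details taken from the paper.

```python
import random

class OrgAgent:
    """Toy single-state Q-learning agent: each federated round, a
    (hypothetical) organization picks how much to participate."""

    def __init__(self, actions=("skip", "partial", "full"),
                 alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {a: 0.0 for a in actions}  # Q-value per action
        self.rng = random.Random(seed)

    def choose(self):
        # epsilon-greedy: explore with probability epsilon, else exploit
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        # standard Q-learning update; one state, so bootstrap on max Q
        best_next = max(self.q.values())
        self.q[action] += self.alpha * (
            reward + self.gamma * best_next - self.q[action]
        )

# Illustrative simulation: fuller participation earns a higher net reward
# (made-up values), so the agent should learn to prefer "full".
agent = OrgAgent(seed=1)
rewards = {"skip": 0.0, "partial": 0.4, "full": 1.0}
for _ in range(500):
    a = agent.choose()
    agent.update(a, rewards[a])
```

In the paper's setting the reward would instead reflect each organization's individual incentives (accuracy gain, resource cost, trust signals recorded on-chain); this sketch only shows the Q-learning mechanics the abstract names.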
