
AsyncFedGAN: An Efficient and Staleness-Aware Asynchronous Federated Learning Framework for Generative Adversarial Networks

Authors: Manu, Daniel; Alazzwi, Abee; Yao, Jingjing; Lin, Youzuo; Sun, Xiang

Affiliations: SECNet Labs, Dept. of Electrical & Computer Engineering, University of New Mexico, Albuquerque, NM 87131, USA; Dept. of Computer Science, Texas Tech University, Lubbock, TX 79409, USA; School of Data Science & Society, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA

Published in: IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS (IEEE Trans Parallel Distrib Syst)

Year/Volume/Issue: 2025, Vol. 36, No. 3

Pages: 553-569

Subject classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awardable in engineering or science)]

Funding: National Science Foundation [CNS-2148178, OIA-2417062]

Keywords: Training; Computational modeling; Generative adversarial networks; Generators; Data models; Convergence; Load modeling; Data privacy; Adaptation models; Accuracy; Generative adversarial networks (GANs); federated learning; asynchronous; molecular discovery

Abstract: Generative Adversarial Networks (GANs) are deep learning models that learn and generate new samples similar to existing ones. Traditionally, GANs are trained in centralized data centers, raising data privacy concerns because clients must upload their data. To address this, Federated Learning (FL) can be integrated with GANs, allowing collaborative training without sharing local data. However, this integration is complex because GANs involve two interdependent models, the generator and the discriminator, while FL typically handles a single model over distributed datasets. In this article, we propose a novel asynchronous FL framework for GANs, called AsyncFedGAN, designed to efficiently and distributively train both models, tailored for molecule generation. AsyncFedGAN addresses the challenges of training interactive models, resolves the straggler issue in synchronous FL, reduces model staleness in asynchronous FL, and lowers client energy consumption. Our extensive simulations for molecular discovery show that AsyncFedGAN achieves convergence with proper settings, outperforms baseline methods, and balances model performance with client energy usage.
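To make the staleness-aware asynchronous aggregation idea mentioned in the abstract concrete, below is a minimal illustrative sketch in Python. It is not the paper's actual update rule: the decay function, the mixing coefficient, and the names (staleness_weight, async_aggregate, base_mix, alpha) are assumptions in the general spirit of asynchronous FL schemes, shown here only to indicate how a stale client update can be discounted separately for the generator and the discriminator.

```python
import copy


def staleness_weight(staleness, alpha=0.6):
    # Hypothetical polynomial decay: the more global versions elapsed while
    # the client trained, the smaller the weight its update receives.
    return (1.0 + staleness) ** (-alpha)


def async_aggregate(global_params, client_params, staleness, base_mix=0.5, alpha=0.6):
    """Mix one client's update into the global model, discounted by staleness.

    global_params / client_params: dicts mapping parameter name -> float
    (a toy stand-in for tensors). 'staleness' is the number of global model
    versions that elapsed while the client was training locally.
    """
    mix = base_mix * staleness_weight(staleness, alpha)
    new_params = copy.deepcopy(global_params)
    for name, g in global_params.items():
        new_params[name] = (1.0 - mix) * g + mix * client_params[name]
    return new_params


if __name__ == "__main__":
    # A GAN has two interdependent models, so keep two global states.
    global_G = {"w": 0.0}
    global_D = {"w": 0.0}
    # A fresh update (staleness 0) moves the model further than a stale one.
    global_G = async_aggregate(global_G, {"w": 1.0}, staleness=0)
    global_D = async_aggregate(global_D, {"w": 1.0}, staleness=4)
    print(global_G, global_D)
```

In this toy run the generator absorbs more of its client's update than the discriminator does, simply because the discriminator's update arrived with higher staleness; any resemblance to AsyncFedGAN's specific weighting is assumed, not taken from the source.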
