Author Affiliations: Univ New Mexico, Dept Elect & Comp Engn, SECNet Labs, Albuquerque, NM 87131, USA; Texas Tech Univ, Dept Comp Sci, Lubbock, TX 79409, USA; Univ North Carolina Chapel Hill, Sch Data Sci & Soc, Chapel Hill, NC 27599, USA
Publication: IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS (IEEE Trans Parallel Distrib Syst)
Year/Volume/Issue: 2025, Vol. 36, Issue 3
Pages: 553-569
Core Indexing:
Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awarded in Engineering or Science)]
Funding: National Science Foundation [CNS-2148178, OIA-2417062]
Keywords: Training; Computational modeling; Generative adversarial networks; Generators; Data models; Convergence; Load modeling; Data privacy; Adaptation models; Accuracy; Generative adversarial networks (GANs); federated learning; asynchronous; molecular discovery
Abstract: Generative Adversarial Networks (GANs) are deep learning models that learn from existing samples and generate new ones similar to them. Traditionally, GANs are trained in centralized data centers, raising data privacy concerns because clients must upload their data. To address this, Federated Learning (FL) is integrated with GANs, allowing collaborative training without sharing local data. However, this integration is complex because GANs involve two interdependent models, the generator and the discriminator, while FL typically handles a single model over distributed datasets. In this article, we propose a novel asynchronous FL framework for GANs, called AsyncFedGAN, designed to efficiently and distributively train both models, tailored for molecule generation. AsyncFedGAN addresses the challenges of training interactive models, resolves the straggler issue in synchronous FL, reduces model staleness in asynchronous FL, and lowers client energy consumption. Our extensive simulations for molecular discovery show that AsyncFedGAN achieves convergence with proper settings, outperforms baseline methods, and balances model performance with client energy usage.
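Note: To make the asynchronous-aggregation idea in the abstract concrete, the following is a minimal, hypothetical Python sketch. It is not the authors' AsyncFedGAN algorithm; it only illustrates the general pattern the abstract refers to: a server keeps separate global generator and discriminator parameters and mixes each arriving client update into them with a weight that decays with the update's staleness, a common heuristic in asynchronous FL. All class and parameter names, and the polynomial decay rule, are illustrative assumptions.

import numpy as np

class AsyncGanServer:
    # Hypothetical staleness-aware asynchronous aggregator for a federated
    # GAN, where the generator (g) and discriminator (d) are tracked as two
    # separate global models, as the abstract notes FL must handle here.
    def __init__(self, g_params, d_params, base_lr=0.5, decay=0.5):
        self.g = g_params          # global generator parameters
        self.d = d_params          # global discriminator parameters
        self.version = 0           # global model version counter
        self.base_lr = base_lr     # mixing weight for a perfectly fresh update
        self.decay = decay         # staleness decay exponent (an assumption)

    def _mix_weight(self, client_version):
        # Staleness = number of global updates applied since this client
        # last downloaded the global models.
        staleness = self.version - client_version
        # Polynomial decay: older updates get smaller mixing weights.
        return self.base_lr / (1.0 + staleness) ** self.decay

    def apply_update(self, client_g, client_d, client_version):
        # Mix a (possibly stale) client update into both global models
        # immediately, without waiting for other clients.
        alpha = self._mix_weight(client_version)
        self.g = (1 - alpha) * self.g + alpha * client_g
        self.d = (1 - alpha) * self.d + alpha * client_d
        self.version += 1
        return self.version

# Toy usage: two clients that synced at version 0 report back at different
# speeds (the straggler scenario that synchronous FL would block on).
server = AsyncGanServer(g_params=np.zeros(4), d_params=np.zeros(4))
server.apply_update(np.ones(4), np.ones(4), client_version=0)          # fresh update
server.apply_update(np.full(4, 2.0), np.full(4, 2.0), client_version=0)  # stale by 1 round
print(server.g, server.version)

The design point this sketch captures is the trade-off the abstract describes: down-weighting stale updates limits the distortion a slow client can introduce, without forcing the server to wait for stragglers as synchronous FL does.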