
Energy-Efficient Split Learning for Fine-Tuning Large Language Models in Edge Networks

Authors: Li, Zuguang; Wu, Shaohua; Li, Liang; Zhang, Songge

Affiliations: School of Electronics and Information Engineering, Harbin Institute of Technology, Shenzhen 518055, China; Frontier Research Center, Pengcheng Laboratory, Shenzhen 518055, China; Guangdong Provincial Key Laboratory of Aerospace Communication and Networking Technology, Harbin Institute of Technology, Shenzhen 518055, China; School of Electronic and Computer Engineering, Peking University, Shenzhen 518000, China

Published in: arXiv

Year: 2024

Abstract: In this letter, we propose an energy-efficient split learning (SL) framework for fine-tuning large language models (LLMs) using geo-distributed personal data at the network edge, where LLMs are split and alternately fine-tuned across massive mobile devices and an edge server. Considering the device heterogeneity and channel dynamics in edge networks, a Cut lAyer and computing Resource Decision (CARD) algorithm is developed to minimize training delay and energy consumption. Simulation results demonstrate that the proposed approach reduces the average training delay and the server's energy consumption by 70.8% and 53.1%, respectively, compared to the benchmarks. Copyright © 2024, The Authors. All rights reserved.
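The split-learning workflow the abstract describes can be sketched as follows: the layers up to a cut layer run on the mobile device, the remaining layers run on the edge server, and only the cut-layer activations ("smashed data") and their gradients cross the network. This is a minimal illustrative PyTorch sketch under assumed layer sizes and a fixed, hand-picked cut index; it is not the paper's CARD algorithm, which additionally decides the cut layer and computing-resource allocation.

```python
# Minimal split-learning sketch (illustrative only; layer sizes and
# the cut index are assumptions, not taken from the paper).
import torch
import torch.nn as nn

layers = nn.ModuleList([nn.Linear(32, 32), nn.ReLU(),
                        nn.Linear(32, 32), nn.ReLU(),
                        nn.Linear(32, 10)])
cut = 2  # cut-layer index; in the paper this is chosen by CARD

device_part = nn.Sequential(*layers[:cut])   # runs on the mobile device
server_part = nn.Sequential(*layers[cut:])   # runs on the edge server

opt_dev = torch.optim.SGD(device_part.parameters(), lr=0.01)
opt_srv = torch.optim.SGD(server_part.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 32)               # device-side batch of personal data
y = torch.randint(0, 10, (8,))

# Device-side forward pass up to the cut layer; only the cut-layer
# activations (smashed data) would be transmitted uplink.
smashed = device_part(x)
sent = smashed.detach().requires_grad_(True)  # stands in for the network transfer

# Server-side forward and backward pass on the remaining layers.
out = server_part(sent)
loss = loss_fn(out, y)
opt_srv.zero_grad()
loss.backward()
opt_srv.step()

# The gradient at the cut layer is sent back downlink, and the device
# completes the backward pass through its local layers.
opt_dev.zero_grad()
smashed.backward(sent.grad)
opt_dev.step()
```

The cut index trades off device-side computation against the size of the transmitted activations, which is why, as the abstract notes, choosing it jointly with computing resources matters for delay and energy under device heterogeneity and channel dynamics.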
