Federated Transfer Learning for On-Device LLMs Efficient Fine Tuning Optimization

Authors: Chuantao Li, Bruce Gu, Zhigang Zhao, Youyang Qu, Guomao Xin, Jidong Huo, Longxiang Gao

Affiliations:
1. Key Laboratory of Computing Power Network and Information Security, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan 250000, China; also with Shandong Provincial Key Laboratory of Computer Networks, Jinan 250000, China
2. Key Laboratory of Computing Power Network and Information Security, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan 250000, China; also with Department of Computer Science, Fudan University, Shanghai 200000, China
3. TelChina Group Co., Ltd., Jinan 250000, China
4. Key Laboratory of Computing Power Network and Information Security, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan 250000, China; also with College of Oceanography and Space Informatics, China University of Petroleum (East China), Qingdao 266000, China

Published in: Big Data Mining and Analytics

Year/Volume/Issue: 2025, Vol. 8, No. 2

Pages: 430-446

Subject classification: 12 [Management]; 1201 [Management - Management Science and Engineering (degrees conferrable in Management or Engineering)]; 081104 [Engineering - Pattern Recognition and Intelligent Systems]; 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0811 [Engineering - Control Science and Engineering]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]

Funding: Supported by the Key R&D Program of Shandong Province (No. 2024CXGC010111), the Jinan Science and Technology Bureau (No. 202333043), the Shandong Provincial Natural Science Foundation (Nos. ZR2021LZH008 and ZR202211150015), and the Taishan Scholars Program (No. TSQNZ20230621)

Keywords: Federated learning; fine-tuning; deep learning; Large Language Models (LLMs)

Abstract: The proliferation of Large Language Models (LLMs) has catalyzed the growth of various applications. It is therefore imperative to ensure the controlled and beneficial application of LLMs across specific domains for downstream tasks through transfer learning, while preserving their general capabilities. We propose a novel and on-device efficient fine-tuning optimization algorithm for LLMs, utilizing federated transfer learning. Specifically, we introduce the Fusion of low Rank Adaptation (FoRA) optimization algorithm from a micro perspective, which enhances multi-dimensional feature aggregation through the addition of efficient adapters. From a meso perspective, we extend the application of the FoRA algorithm across all linear layers within the Transformer architecture to facilitate downstream task adaptation. Finally, from a macro perspective and with a focus on the medical domain, we incorporate quantization techniques into the federated learning framework to achieve on-device efficient fine-tuning optimization, thereby offering dual protection for data and model privacy. Experimental results indicate that, compared to existing state-of-the-art methods, our algorithm significantly improves LLM performance while ensuring dual privacy protection of both data and models.
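
The abstract describes FoRA at three levels: extra low-rank branches fused into the weight update (micro), applied across every linear layer of the Transformer (meso), and combined with quantization inside a federated loop (macro). This record does not reproduce the paper's actual formulation, so the PyTorch sketch below only illustrates the general pattern of the micro and meso steps; the class name FusedLoRALinear, the branch count, and the sum-based fusion are illustrative assumptions, not the authors' method.

import torch.nn as nn

class FusedLoRALinear(nn.Module):
    """A frozen nn.Linear augmented with trainable low-rank branches
    whose outputs are fused (here: summed) into the base output."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0,
                 n_branches: int = 2):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # pretrained weights stay frozen
        self.scaling = alpha / rank
        # Parallel low-rank branches; summing them is one plausible reading of
        # "multi-dimensional feature aggregation", not the paper's exact operator.
        self.down = nn.ModuleList(
            nn.Linear(base.in_features, rank, bias=False) for _ in range(n_branches))
        self.up = nn.ModuleList(
            nn.Linear(rank, base.out_features, bias=False) for _ in range(n_branches))
        for u in self.up:
            nn.init.zeros_(u.weight)         # adapters start as an identity update

    def forward(self, x):
        out = self.base(x)
        for down, up in zip(self.down, self.up):
            out = out + self.scaling * up(down(x))
        return out

def wrap_all_linears(module: nn.Module, **kwargs):
    """Recursively replace every nn.Linear with its adapter-wrapped version,
    mirroring the meso-level extension of FoRA to all linear layers."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            setattr(module, name, FusedLoRALinear(child, **kwargs))
        else:
            wrap_all_linears(child, **kwargs)

# Hypothetical usage: wrap a pretrained Transformer before fine-tuning, so that
# only the adapter weights remain trainable (and hence cheap to quantize and
# exchange in a federated round):
#   wrap_all_linears(model, rank=8, alpha=16.0, n_branches=2)
#   trainable = [p for p in model.parameters() if p.requires_grad]

In a federated deployment of this kind, each device would train only the adapter weights, quantize them for transmission, and the server would average the updates; raw data never leaves the device and the full model weights are never exchanged, which is consistent with the dual data-and-model protection the abstract claims.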
