
Chasing Common Knowledge: Joint Large Model Selection and Pulling in MEC With Parameter Sharing

Authors: Zhou, Lizhen; Xu, Zichuan; Xia, Qiufen; Xu, Zhou; Ren, Wenhao; Qi, Wenbo; Ma, Jinjing; Yan, Song; Yang, Yuan

Affiliations: DUT Sch Software Technol, Dalian 116024, Liaoning, Peoples R China; DUT RU Int Sch Informat Sci & Engn, Dalian 116024, Liaoning, Peoples R China; Ant Grp, Hangzhou 310000, Zhejiang, Peoples R China; Alibaba Cloud, Hangzhou 310030, Zhejiang, Peoples R China

Published in: IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS (IEEE Trans Parallel Distrib Syst)

Year/Volume/Issue: 2025, Vol. 36, No. 3

Pages: 437-454


Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (Engineering or Science degree conferrable)]

Funding: National Natural Science Foundation of China (NSFC) [62172068, 62172071]; Shandong Provincial Natural Science Foundation [ZR2023LZH008, ZR2023LZH013, ZR2023LZH016]; Joint research project with China Coal Research Institute [2022-3-KJHZ003]; CCF-Ant Research Fund

Keywords: Computational modeling; Accuracy; Delays; Inference algorithms; Heuristic algorithms; Costs; Adaptation models; Context modeling; Approximation algorithms; Load modeling; Mobile edge computing (MEC); model selection; model pulling; approximation algorithm; online learning

Abstract: Pretrained Foundation Models (PFMs) are regarded as a promising accelerator for the development of various Artificial Intelligence (AI) applications, and have recently been widely fine-tuned to satisfy users' personalized inference demands. As many users are attracted to PFM-based AI applications, remote data centers are increasingly unable to solely bear the enormous computational demands and meet the delay requirements of inference requests. Mobile edge computing (MEC) offers a viable solution for delivering low-latency inference services by pulling fine-tuned PFMs from the remote data center to cloudlets in the proximity of users. However, a fine-tuned PFM typically comprises billions of model parameters, which are highly resource-intensive, time-consuming, and cost-prohibitive to execute at the edge. To address this, we investigate a novel joint large model selection and pulling problem in MEC networks. The novelty of our study lies in exploring parameter sharing among fine-tuned PFMs based on their common knowledge. Specifically, we first formulate a Non-Linear Integer Program (NLIP) for the problem to minimize the total delay of implementing all inference requests. We then transform the NLIP into an equivalent Integer Linear Program (ILP) that is much simpler to solve. We further propose a randomized algorithm with a provable approximation ratio for the problem. We also consider the online version of the problem with uncertain request demand, and develop an online learning algorithm with a bounded regret. The crux of the online algorithm is the adoption of the multi-armed bandit technique with restricted context for dynamic admissions of inference requests. We finally conduct extensive experiments based on real datasets. Experimental results demonstrate that our algorithms reduce total delays and average costs by at least 38%, while achieving a 5% improvement in average accuracy.
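The abstract states that the online algorithm rests on the multi-armed bandit technique for dynamically admitting inference requests. As a rough illustration only (this is a classic UCB1 bandit, not the authors' restricted-context algorithm; all names and the reward model here are hypothetical), learning which serving option gives the best average reward might be sketched as:

```python
import math
import random

class UCB1:
    """Minimal UCB1 multi-armed bandit (illustrative sketch only;
    the paper's restricted-context bandit is more involved)."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms    # number of pulls per arm
        self.values = [0.0] * n_arms  # empirical mean reward per arm

    def select(self):
        # Play each arm once before applying the UCB rule.
        for arm, c in enumerate(self.counts):
            if c == 0:
                return arm
        total = sum(self.counts)
        # UCB score: empirical mean plus an exploration bonus that
        # shrinks as an arm is pulled more often.
        return max(range(len(self.counts)),
                   key=lambda a: self.values[a]
                   + math.sqrt(2 * math.log(total) / self.counts[a]))

    def update(self, arm, reward):
        # Incremental update of the arm's empirical mean reward.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Toy usage: three hypothetical serving options; option 1 has the
# highest success rate (e.g. most often meets the delay requirement).
random.seed(0)
means = [0.3, 0.8, 0.5]
bandit = UCB1(len(means))
for _ in range(2000):
    arm = bandit.select()
    bandit.update(arm, 1.0 if random.random() < means[arm] else 0.0)
best = max(range(len(means)), key=lambda a: bandit.counts[a])
```

After enough rounds the bandit concentrates its pulls on the best arm, which is what yields the bounded-regret behavior the abstract refers to.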
