Adaptive Contextual Caching for Mobile Edge Large Language Model Service

Authors: Liu, Guangyuan; Liu, Yinqiu; Wang, Jiacheng; Du, Hongyang; Niyato, Dusit; Kang, Jiawen; Xiong, Zehui

Affiliations: College of Computing and Data Science, The Energy Research Institute @ NTU Interdisciplinary Graduate Program, Nanyang Technological University, Singapore; College of Computing and Data Science, Nanyang Technological University, Singapore; Department of Electrical and Electronic Engineering, University of Hong Kong, Hong Kong; School of Automation, Guangdong University of Technology, China; Pillar of Information Systems Technology and Design, Singapore University of Technology and Design, Singapore

Published in: arXiv

Year: 2025

Subject: Deep reinforcement learning

Abstract: Mobile edge Large Language Model (LLM) deployments face inherent constraints, such as limited computational resources and network bandwidth. Although Retrieval-Augmented Generation (RAG) mitigates some challenges by integrating external knowledge bases, inefficient cache management can still result in high retrieval latency and frequent cache updates. To address these issues, we propose an Adaptive Contextual Caching (ACC) framework that anticipates user needs by proactively caching semantically relevant data for mobile-edge LLMs. ACC utilizes a deep reinforcement learning (DRL) module to refine cache replacement policies, balancing user context, document similarity, and the overhead associated with cache misses. Experimental results demonstrate that ACC increases cache hit rates to over 80% after only 11 training episodes, outperforming FIFO, LRU, and semantic-only caching while reducing retrieval latency by up to 40%. In particular, ACC also reduces local caching overhead (i.e., the cost of updating the cache when a miss occurs) by as much as 55%, enabling scalable, low-latency LLM services in resource-constrained edge environments. © 2025, CC BY.
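The record above carries no code; as a purely illustrative sketch (not taken from the paper), the snippet below shows one way a context-aware replacement policy of the kind the abstract describes could be structured: cached documents carry embeddings, and the eviction score trades off semantic similarity to the current user context against the estimated cost of re-fetching the document on a miss. In ACC these trade-off weights would be refined by a DRL module; here they are fixed constants, and all identifiers (ContextualCache, sim_weight, cost_weight) are hypothetical.

```python
# Illustrative sketch only (not from the paper): a semantic cache whose
# eviction policy weighs context similarity against estimated miss cost.
# In ACC the weights below would be learned by a DRL agent; here they are fixed.
import numpy as np

class ContextualCache:
    def __init__(self, capacity, sim_weight=0.7, cost_weight=0.3):
        self.capacity = capacity
        self.sim_weight = sim_weight    # value of semantic relevance to current context
        self.cost_weight = cost_weight  # value of avoiding an expensive re-fetch
        self.entries = {}               # key -> (embedding, estimated miss cost)

    @staticmethod
    def _cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def lookup(self, key):
        return self.entries.get(key)

    def insert(self, key, embedding, miss_cost, context_embedding):
        # On a miss with a full cache, evict the entry with the lowest retention score.
        if key not in self.entries and len(self.entries) >= self.capacity:
            victim = min(
                self.entries,
                key=lambda k: self.sim_weight * self._cosine(self.entries[k][0], context_embedding)
                              + self.cost_weight * self.entries[k][1],
            )
            del self.entries[victim]
        self.entries[key] = (np.asarray(embedding, dtype=float), float(miss_cost))

# Toy usage: documents semantically close to the current query context are preferentially retained.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cache = ContextualCache(capacity=4)
    context = rng.normal(size=8)
    for i in range(10):
        cache.insert(f"doc{i}", rng.normal(size=8),
                     miss_cost=rng.uniform(0.1, 1.0), context_embedding=context)
    print(sorted(cache.entries))
```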
