Details
ISBN:
(Print) 9798350361261; 9798350361278
The rapid progression of autonomous driving and vehicular networking technologies has catalyzed the emergence of advanced vehicle applications aimed at improving traffic safety and the driving experience. This advancement, however, is challenged by the limited computational and storage capacities of on-board vehicle systems. To address this, Vehicular Edge Computing (VEC) has emerged as a pivotal solution for enhancing vehicular computational capability. In this context, we introduce a novel VEC task-offloading model based on Deep Reinforcement Learning (DRL). The model leverages otherwise idle computational resources in vehicles to enable efficient edge-computing offloading within heterogeneous networks. A key innovation of our approach is the integration of Reinforcement Learning (RL) with Deep Learning (DL), which significantly improves the convergence efficiency of the system. We further propose an enhanced Q-learning algorithm tailored to jointly address the task offloading and processing challenges in VEC. The algorithm makes optimal offloading decisions with the goal of minimizing the overall system cost, encompassing both latency and energy consumption. Rigorous simulation results demonstrate that the improved Q-learning approach substantially reduces total system cost while concurrently improving quality of service in VEC environments. Our study not only offers a robust framework for computation offloading in vehicular networks but also paves the way for future research on AI-driven optimization of vehicular technologies.
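The Q-learning offloading decision described in the abstract can be sketched with a small tabular example. Everything below is an illustrative assumption rather than the paper's actual formulation: the state space (discretized task sizes), the two-action set (local vs. edge execution), the i.i.d. task arrivals, and the toy latency/energy cost coefficients are all invented for demonstration.

```python
import random

# Hedged sketch of Q-learning for VEC task offloading.
# States, actions, and the cost model are illustrative assumptions.

ACTIONS = ["local", "edge"]   # offload target: on-board CPU vs. edge server
SIZES = [1, 2, 4, 8]          # discretized task sizes serve as states

def cost(action, task_size, w_latency=0.5, w_energy=0.5):
    """Toy total system cost: weighted sum of latency and energy (assumed model)."""
    if action == "local":
        latency = task_size / 2.0          # slower on-board processing
        energy = task_size * 0.2           # on-board energy draw
    else:
        latency = task_size / 8.0 + 1.0    # fast edge CPU plus fixed uplink delay
        energy = task_size * 0.05          # transmission energy only
    return w_latency * latency + w_energy * energy

def train(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning loop; reward is the negative total system cost."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in SIZES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(SIZES)
        # Epsilon-greedy action selection over the two offloading choices.
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        r = -cost(a, s)
        s2 = rng.choice(SIZES)             # next task arrives (i.i.d. assumption)
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q

Q = train()
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in SIZES}
print(policy)
```

Under this cost model the learned policy keeps small tasks on-board (the fixed uplink delay dominates) and offloads large ones to the edge, which is the qualitative behavior the abstract's joint latency/energy objective aims for.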