A Reinforcement Learning Approach to Price Cloud Resources With Provable Convergence Guarantees

Authors: Xie, Hong; Lui, John C. S.

Author Affiliations: Chongqing Univ, Coll Comp Sci, Chongqing 400044, Peoples R China; Chinese Univ Hong Kong, Dept Comp Sci & Engn, Hong Kong, Peoples R China

Publication: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (IEEE Trans. Neural Networks Learn. Sys.)

Year/Volume/Issue: 2022, Vol. 33, No. 12

Pages: 7448-7460

Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awardable in Engineering or Science)]

Funding: General Research Fund (GRF)

Keywords: Cloud computing; Pricing; Heuristic algorithms; Convergence; Mathematical model; Dynamic scheduling; Computational modeling; Cloud resources pricing; reinforcement learning (RL); value projection

Abstract: How to generate more revenue is crucial to cloud providers. Evidence from the Amazon cloud system indicates that "dynamic pricing" would be more profitable than "static pricing." The challenges are: how to set the price in real time so as to maximize revenue? How to estimate the price-dependent demand so as to optimize the pricing decision? We first design a discrete-time dynamic pricing scheme and formulate a Markov decision process to characterize the evolving dynamics of the price-dependent demand. We formulate a revenue maximization framework to determine the optimal price and theoretically characterize the "structure" of the optimal revenue and the optimal price. We apply Q-learning to infer the optimal price from historical transaction data and derive sufficient conditions on the model that guarantee its convergence to the optimal price, but it converges slowly. To speed up convergence, we incorporate the structure of the optimal revenue obtained earlier, leading to the VpQ-learning (Q-learning with value projection) algorithm. We derive sufficient conditions under which the VpQ-learning algorithm converges to the optimal policy. Experiments on a real-world dataset show that the VpQ-learning algorithm outperforms a variety of baselines: it improves the revenue by as much as 50% over Q-learning, speedy Q-learning, and adaptive real-time dynamic programming (ARTDP), and by as much as 20% over the fixed pricing scheme.
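To make the algorithm described in the abstract concrete, below is a minimal sketch of tabular Q-learning with a value-projection step for a discretized pricing MDP. It is an illustration under stated assumptions, not the authors' implementation: the toy demand simulator, the state/action discretization, the step-size and exploration schedules, and the choice of a monotone projection (standing in for the paper's structural characterization of the optimal revenue) are all hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative setup: every constant and the demand model below are
    # assumptions made for this sketch, not values from the paper.
    N_DEMAND = 10                        # discretized demand states
    PRICES = np.linspace(0.1, 1.0, 10)   # candidate prices (the action set)
    GAMMA = 0.95                         # discount factor

    def simulate_demand(state, a):
        """Toy price-dependent demand: a higher price makes demand more likely to drop."""
        drift = -1 if rng.random() < PRICES[a] else 1
        nxt = int(np.clip(state + drift, 0, N_DEMAND - 1))
        return nxt, PRICES[a] * nxt      # next demand state, revenue = price x demand

    def project_value(v):
        """Hypothetical value-projection step: force the value estimate to be
        nondecreasing in the demand state, a stand-in for the structural
        property of the optimal revenue characterized in the paper."""
        return np.maximum.accumulate(v)

    Q = np.zeros((N_DEMAND, len(PRICES)))
    V = np.zeros(N_DEMAND)               # projected value estimate
    state = N_DEMAND // 2

    for t in range(1, 50_001):
        alpha = t ** -0.6                # diminishing step size
        eps = max(0.05, t ** -0.5)       # epsilon-greedy exploration
        a = rng.integers(len(PRICES)) if rng.random() < eps else int(np.argmax(Q[state]))
        nxt, r = simulate_demand(state, a)
        # Q-learning update that bootstraps from the *projected* value estimate,
        # then re-project after the update (the "Vp" step).
        Q[state, a] += alpha * (r + GAMMA * V[nxt] - Q[state, a])
        V = project_value(Q.max(axis=1))
        state = nxt

    print(PRICES[np.argmax(Q, axis=1)])  # learned price for each demand state

The departure from plain Q-learning is the projection at the end of each iteration: the value estimate is restricted to a structured set before being used in the bootstrapped target, which is the intuition behind the "Vp" in VpQ-learning. The actual projection operator and the sufficient conditions for convergence to the optimal policy are derived in the paper.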
