Distributed Q-Learning Algorithm for Dynamic Resource Allocation With Unknown Objective Functions and Application to Microgrid

Authors: Dai, Pengcheng; Yu, Wenwu; Chen, Duxin

Affiliation: School of Mathematics, Jiangsu Key Laboratory of Networked Collective Intelligence, Southeast University, Nanjing 210096, People's Republic of China

Published in: IEEE TRANSACTIONS ON CYBERNETICS (IEEE Trans. Cybern.)

Year/Volume/Issue: 2022, Vol. 52, Issue 11

Pages: 12340-12350

Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0811 [Engineering - Control Science and Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awarded in Engineering or Science)]

Funding: National Natural Science Foundation of China [62073076, 61903079]; Jiangsu Provincial Key Laboratory of Networked Collective Intelligence [BM2017002]; Fundamental Research Funds for the Central Universities [2242019K40111]

Keywords: Resource management; Cost function; Approximation algorithms; Heuristic algorithms; Training; Distributed algorithms; Smart grids; Distributed optimization; Distributed Q-learning; Dynamic resource allocation; Function approximation

Abstract: The dynamic resource allocation problem (DRAP) with unknown cost functions and unknown resource transition functions is studied in this article. The goal of the agents is to minimize the sum of cost functions over given time periods in a distributed way, that is, by exchanging information only with their neighboring agents. First, we propose a distributed Q-learning algorithm for the DRAP with unknown cost functions and unknown resource transition functions under discrete local feasibility constraints (DLFCs). It is theoretically proved that the joint policy of agents produced by the distributed Q-learning algorithm always provides a feasible allocation (FA), that is, one satisfying the constraints at each time period. Then, we also study the DRAP with unknown cost functions and unknown resource transition functions under continuous local feasibility constraints (CLFCs), for which a novel distributed Q-learning algorithm is proposed based on function approximation and distributed optimization. It should be noted that the update rule of each agent's local policy also ensures that the joint policy of agents is an FA at each time period. This property is vital for executing the ε-greedy policy throughout the training process. Finally, simulations are presented to demonstrate the effectiveness of the proposed algorithms.
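To make the setting concrete, below is a minimal, self-contained sketch of tabular distributed Q-learning on a toy resource allocation task. This is not the authors' algorithm: the ring topology, the quadratic stand-in costs, the pairwise-swap action that keeps the joint allocation feasible, and the neighbor-averaging consensus step are all illustrative assumptions; the paper's feasibility-preserving update and information exchange differ in detail.

```python
# Illustrative sketch only: a simplified flavor of "distributed Q-learning
# with feasibility constraints", NOT the algorithm from the paper.
import numpy as np

N, LEVELS, R = 4, 5, 8          # agents; discrete allocation levels 0..4; total resource
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

# Each agent holds a local Q-table over (current allocation, next allocation).
Q = [np.zeros((LEVELS, LEVELS)) for _ in range(N)]
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}  # ring graph

def local_cost(i, a):
    # Stand-in for the unknown cost function: agent i only observes its value.
    return (a - (i + 1)) ** 2

alloc = np.array([2, 2, 2, 2])  # start from a feasible allocation (sums to R)

for step in range(2000):
    for i in range(N):
        s = int(alloc[i])
        # Trading resource with one neighbor keeps the total fixed, so the
        # joint allocation stays feasible at every step (mirroring the FA
        # property needed for epsilon-greedy exploration during training).
        j = int(rng.choice(neighbors[i]))
        feasible = [a for a in range(LEVELS)
                    if 0 <= alloc[j] + (s - a) < LEVELS]
        if rng.random() < EPS:
            a = int(rng.choice(feasible))              # explore
        else:
            a = min(feasible, key=lambda x: Q[i][s, x])  # greedy (cost minimization)
        alloc[j] += s - a
        alloc[i] = a
        r = local_cost(i, a)  # observed cost; no closed form is assumed
        Q[i][s, a] += ALPHA * (r + GAMMA * Q[i][a].min() - Q[i][s, a])
    # Consensus step: average Q-tables with neighbors, one simple way to make
    # the learning "distributed" via neighbor-only information exchange.
    Q = [(Q[i] + sum(Q[k] for k in neighbors[i])) / (1 + len(neighbors[i]))
         for i in range(N)]

print("final allocation:", alloc.tolist(), "total:", int(alloc.sum()))  # total stays R
```

Running the sketch, the printed total remains R after every episode, which is the point being illustrated: exploration never leaves the feasible set, so the ε-greedy policy can be executed throughout training.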
