Safe Policy Iteration: A Monotonically Improving Approximate Policy Iteration Approach

Authors: Metelli, Alberto Maria; Pirotta, Matteo; Calandriello, Daniele; Restelli, Marcello

Affiliations: Politecnico di Milano, DEIB, Milan, Italy; Facebook AI Research, Paris, France; Istituto Italiano di Tecnologia, Genoa, Italy

Publication: JOURNAL OF MACHINE LEARNING RESEARCH

Year/Volume/Issue: 2021, Vol. 22, No. 1

Pages: 1-83

Subject classification: 08 [Engineering]; 0811 [Engineering - Control Science and Engineering]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]

Keywords: Reinforcement Learning; Approximate Dynamic Programming; Approximate Policy Iteration; Policy Oscillation; Policy Chattering; Markov Decision Process

Abstract: This paper presents a study of the policy improvement step that can be usefully exploited by approximate policy-iteration algorithms. When either the policy evaluation step or the policy improvement step returns an approximated result, the sequence of policies produced by policy iteration may not be monotonically increasing, and oscillations may occur. To address this issue, we consider safe policy improvements, i.e., at each iteration, we search for a policy that maximizes a lower bound to the policy improvement w.r.t. the current policy, until no improving policy can be found. We propose three safe policy-iteration schemas that differ in the way the next policy is chosen w.r.t. the estimated greedy policy. Besides being theoretically derived and discussed, the proposed algorithms are empirically evaluated and compared on some chain-walk domains, the prison domain, and on the Blackjack card game.
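
Note: the sketch below illustrates the safe policy-iteration scheme outlined in the abstract on a tabular MDP, assuming a conservative mixture update pi <- alpha * pi_greedy + (1 - alpha) * pi in which the mixing coefficient alpha maximizes a quadratic lower bound on the policy improvement. The function names, the uniform state weighting, and the bound constants c1 and c2 are illustrative stand-ins for exposition, not the exact bound or schemas derived in the paper.

import numpy as np

def greedy_policy(Q):
    """Deterministic greedy policy (one-hot per state) from a Q-table."""
    pi = np.zeros_like(Q)
    pi[np.arange(Q.shape[0]), Q.argmax(axis=1)] = 1.0
    return pi

def safe_policy_iteration(P, R, gamma=0.95, tol=1e-6, max_iter=1000):
    """Sketch of a safe policy-iteration loop on a tabular MDP.

    P: transitions, shape (S, A, S); R: rewards, shape (S, A).
    Each iteration mixes the current policy with the greedy one, choosing
    the mixing coefficient alpha to maximize a quadratic surrogate of the
    policy-improvement lower bound (constants here are illustrative).
    """
    S, A = R.shape
    pi = np.full((S, A), 1.0 / A)                # uniform initial policy
    for _ in range(max_iter):
        # Exact policy evaluation: solve (I - gamma * P_pi) V = R_pi.
        P_pi = np.einsum('sa,sat->st', pi, P)
        R_pi = np.einsum('sa,sa->s', pi, R)
        V = np.linalg.solve(np.eye(S) - gamma * P_pi, R_pi)
        Q = R + gamma * np.einsum('sat,t->sa', P, V)

        pi_greedy = greedy_policy(Q)
        # Advantage of the greedy policy (uniform state weighting for
        # simplicity; the paper weights states by a discounted visitation
        # distribution) and the largest per-state L1 policy distance.
        adv = ((pi_greedy - pi) * Q).sum(axis=1).mean()
        dist = np.abs(pi_greedy - pi).sum(axis=1).max()
        if adv <= tol:
            break                                # no improving policy found
        # Surrogate bound b(alpha) = c1*alpha*adv - c2*alpha**2, maximized
        # in closed form and clipped to [0, 1] so the update stays safe.
        c1 = 1.0 / (1.0 - gamma)
        c2 = gamma * dist**2 / (2.0 * (1.0 - gamma)**2)
        alpha = min(1.0, c1 * adv / (2.0 * c2))
        pi = alpha * pi_greedy + (1.0 - alpha) * pi
    return pi, V

if __name__ == "__main__":
    # Tiny smoke test on a random MDP (hypothetical data, for illustration).
    rng = np.random.default_rng(0)
    P = rng.dirichlet(np.ones(4), size=(4, 2))   # transitions, shape (S=4, A=2, S'=4)
    R = rng.standard_normal((4, 2))              # rewards, shape (S, A)
    pi, V = safe_policy_iteration(P, R)
    print("final policy:\n", pi, "\nvalues:", V)

Clipping alpha to [0, 1] keeps each update a valid stochastic policy, and maximizing the surrogate at every iteration guarantees, under the assumed bound, a non-negative improvement; this is what makes the policy sequence monotonically improving until no improving policy can be found.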
