
ParMod: A Parallel and Modular Framework for Learning Non-Markovian Tasks

Authors: Miao, Ruixuan; Lu, Xu; Tian, Cong; Yu, Bin; Duan, Zhenhua

Affiliation: Institute of Computing Theory and Technology, State Key Laboratory of ISN, Xidian University, China

Published in: arXiv

Year: 2024

Keywords: Contrastive Learning

Abstract: The commonly used Reinforcement Learning (RL) model, the Markov Decision Process (MDP), rests on the basic premise that rewards depend only on the current state and action. However, many real-world tasks are non-Markovian, involving long-term memory and dependencies. The reward-sparseness problem is further amplified in non-Markovian scenarios; hence, learning a non-Markovian task (NMT) is inherently more difficult than learning a Markovian one. In this paper, we propose ParMod, a novel Parallel and Modular RL framework specifically for learning NMTs specified by temporal logic. With the aid of formal techniques, the NMT is modularized into a series of sub-tasks based on the automaton structure (equivalent to its temporal logic counterpart). On this basis, the sub-tasks are trained by a group of agents in parallel, with one agent handling one sub-task. Besides parallel training, the core of ParMod lies in a flexible classification method for modularizing the NMT and an effective reward-shaping method for improving sample efficiency. A comprehensive evaluation is conducted on several challenging benchmark problems with respect to various metrics. The experimental results show that ParMod achieves superior performance over other relevant studies. Our work thus provides a good synergy among RL, NMTs, and temporal logic. © 2024, CC BY.
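The abstract's two core ingredients, modularizing a temporal-logic task via its automaton states and shaping rewards to fight sparseness, can be illustrated with a toy sketch. Everything below (the example automaton, the state-per-sub-task partition, and the potential-based shaping term) is an illustrative assumption based on the abstract, not the authors' actual ParMod implementation:

```python
# Hypothetical sketch of the ParMod idea (structure and names are
# illustrative, not the paper's implementation). A non-Markovian task
# is given as a small automaton; each non-accepting automaton state
# becomes a sub-task assigned to its own agent, and a potential-based
# shaping term rewards progress toward the accepting state.

from collections import deque

# Toy automaton for "eventually a, then eventually b":
# state 0 (start) -> 1 (saw a) -> 2 (accepting: saw b after a).
TRANSITIONS = {
    (0, "a"): 1, (0, "b"): 0,
    (1, "a"): 1, (1, "b"): 2,
    (2, "a"): 2, (2, "b"): 2,
}
ACCEPTING = {2}

def distances_to_accepting(transitions, accepting):
    """BFS over the reversed automaton graph: shortest number of
    transitions from each state to an accepting state."""
    reverse = {}
    for (src, _), dst in transitions.items():
        reverse.setdefault(dst, set()).add(src)
    dist = {q: 0 for q in accepting}
    frontier = deque(accepting)
    while frontier:
        q = frontier.popleft()
        for p in reverse.get(q, ()):
            if p not in dist:
                dist[p] = dist[q] + 1
                frontier.append(p)
    return dist

def shaping_reward(prev_q, next_q, dist, gamma=0.99):
    """Potential-based shaping F = gamma*phi(q') - phi(q), with
    phi(q) = -distance(q); moving closer to acceptance earns reward."""
    phi = lambda q: -dist[q]
    return gamma * phi(next_q) - phi(prev_q)

dist = distances_to_accepting(TRANSITIONS, ACCEPTING)

# Sub-task partition: one agent per non-accepting automaton state.
sub_tasks = {q: f"agent_{q}" for q in dist if q not in ACCEPTING}

# Walk a trace that satisfies the task and accumulate shaping reward.
q, total = 0, 0.0
for symbol in ["b", "a", "b"]:
    q_next = TRANSITIONS[(q, symbol)]
    total += shaping_reward(q, q_next, dist)
    q = q_next
print(q in ACCEPTING)
```

In this sketch the automaton state acts as the "memory" that makes the task Markovian again at the product level; the shaping term is dense (nonzero on most transitions), which hints at how the paper's reward-shaping component could mitigate sparse terminal rewards.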
