Inverse reinforcement learning via nonparametric spatio-temporal subgoal modeling

Authors: Šošić, Adrian; Zoubir, Abdelhak M.; Rueckert, Elmar; Peters, Jan; Koeppl, Heinz

Affiliations: Signal Processing Group, Technische Universität Darmstadt, 64283 Darmstadt, Germany; Institute for Robotics and Cognitive Systems, University of Lübeck, 23538 Lübeck, Germany; Autonomous Systems Labs, Technische Universität Darmstadt, 64289 Darmstadt, Germany; Bioinspired Communication Systems, Technische Universität Darmstadt, 64283 Darmstadt, Germany

Publication: Journal of Machine Learning Research (J. Mach. Learn. Res.)

Year/Volume/Issue: 2018, Vol. 19, Issue 1

Pages: 2777-2821

Subject classification: 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]

Funding: This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreements No. 713010 (GOALRobots) and No. 640554 (SKILLS4ROBOTS).

Keywords: Demonstrations

Abstract: Advances in the field of inverse reinforcement learning (IRL) have led to sophisticated inference frameworks that relax the original modeling assumption that the observed agent behavior reflects only a single intention. Instead of learning a global behavioral model, recent IRL methods divide the demonstration data into parts to account for the fact that different trajectories may correspond to different intentions, e.g., because they were generated by different domain experts. In this work, we go one step further: using the intuitive concept of subgoals, we build upon the premise that even a single trajectory can be explained more efficiently locally within a certain context than globally, enabling a more compact representation of the observed behavior. Based on this assumption, we build an implicit intentional model of the agent’s goals to forecast its behavior in unobserved situations. The result is an integrated Bayesian prediction framework that significantly outperforms existing IRL solutions and provides smooth policy estimates consistent with the expert’s plan. Most notably, our framework naturally handles situations where the intentions of the agent change over time and classical IRL algorithms fail. In addition, due to its probabilistic nature, the model can be straightforwardly applied in active learning scenarios to guide the demonstration process of the expert. © 2018 Adrian Šošić, Elmar Rueckert, Jan Peters, Abdelhak M. Zoubir and Heinz Koeppl.
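
As a rough, self-contained illustration of the abstract’s central premise (that a single trajectory can be explained better by local subgoals than by one global goal), the following Python sketch scores a toy trajectory under a goal-directed Boltzmann policy. Everything here is assumed for illustration only: the chain world, the distance-based utility, the inverse temperature, and the hard-coded split point. It is not the authors’ nonparametric spatio-temporal model, which infers the segmentation and the subgoals from data.

```python
# Minimal sketch of the subgoal premise, NOT the paper's inference method.
# A trajectory that first walks right and then turns around is poorly
# explained by any single goal, but well explained by two local subgoals.
import numpy as np

N_STATES = 10        # toy chain world with states 0..9
ACTIONS = (-1, +1)   # step left / step right
BETA = 5.0           # assumed inverse temperature of the softmax policy

def action_logprob(state, action, goal, beta=BETA):
    """Log-probability of `action` under a Boltzmann policy whose utility
    is the negative distance to `goal` after taking the action."""
    next_states = [min(max(state + a, 0), N_STATES - 1) for a in ACTIONS]
    utils = np.array([-abs(ns - goal) for ns in next_states], dtype=float)
    logps = beta * utils - np.log(np.exp(beta * utils).sum())
    return logps[ACTIONS.index(action)]

def traj_loglik(traj, goal):
    """Log-likelihood of a (state, action) trajectory under one fixed goal."""
    return sum(action_logprob(s, a, goal) for s, a in traj)

# One trajectory whose intention changes mid-way: walk up to state 8,
# then turn around and walk back down to state 2.
traj = [(s, +1) for s in range(2, 8)] + [(s, -1) for s in range(8, 2, -1)]

# Best single ("global") goal vs. one subgoal per segment. The split point
# is fixed here for clarity; the paper instead infers both the segmentation
# and the subgoal locations nonparametrically.
best_global = max(traj_loglik(traj, g) for g in range(N_STATES))
seg1, seg2 = traj[:6], traj[6:]
best_local = (max(traj_loglik(seg1, g) for g in range(N_STATES)) +
              max(traj_loglik(seg2, g) for g in range(N_STATES)))
print(f"best single-goal log-likelihood : {best_global:.2f}")
print(f"best two-subgoal log-likelihood : {best_local:.2f}")
```

On this toy example the two-subgoal explanation attains a far higher log-likelihood, which is the intuition the paper formalizes with a Bayesian nonparametric prior over spatio-temporal subgoals.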
