An Analysis of Discretization Methods for Communication Learning with Multi-Agent Reinforcement Learning

Authors: Vanneste, Astrid; Vanneste, Simon; Mets, Kevin; De Schepper, Tom; Mercelis, Siegfried; Latré, Steven; Hellinckx, Peter

Affiliations: University of Antwerp - Imec, IDLab, Faculty of Applied Engineering, Antwerp, Belgium; University of Antwerp - Imec, IDLab, Department of Computer Science, Antwerp, Belgium

Published in: arXiv

Year: 2022


Subject: Discrete event simulation

Abstract: Communication is crucial in multi-agent reinforcement learning when agents cannot observe the full state of the environment. The most common approach to enabling learned communication between agents is a differentiable communication channel, which allows gradients to flow between agents as a form of feedback. However, this is challenging when we want to use discrete messages to reduce the message size, since gradients cannot flow through a discrete communication channel. Previous work proposed methods to deal with this problem, but these methods were tested in different communication learning architectures and environments, making them hard to compare. In this paper, we compare several state-of-the-art discretization methods as well as two methods that have not been used for communication learning before. We perform this comparison in the context of communication learning using gradients from other agents, and we run tests on several environments. Our results show that no single method is best in all environments; the best choice of discretization method depends greatly on the environment. However, the discretize regularize unit (DRU), the straight-through DRU, and the straight-through Gumbel softmax show the most consistent results across all tested environments. These methods therefore prove to be the best choice for general use, while the straight-through estimator and the Gumbel softmax may provide better results in specific environments but fail completely in others. Copyright © 2022, The Authors. All rights reserved.
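Two of the discretization methods named in the abstract can be sketched briefly. The straight-through estimator discretizes the message in the forward pass but passes the gradient through unchanged in the backward pass; the Gumbel softmax instead draws a differentiable relaxed sample from a categorical message distribution. The sketch below is illustrative only and is not the authors' code: the function names, the threshold-at-zero binarization, and the use of NumPy rather than an autograd framework are all assumptions.

```python
import numpy as np

def st_binarize_forward(x):
    # Forward pass: the receiving agent sees a discrete (binary) message.
    return (x > 0).astype(float)

def st_binarize_backward(grad_output):
    # Backward pass (straight-through): treat the discretization as the
    # identity, so the gradient coming from the receiving agent flows to
    # the sender's continuous message unchanged.
    return grad_output

def gumbel_softmax_sample(logits, temperature=1.0, rng=None):
    # Differentiable relaxation of sampling one symbol from a categorical
    # message distribution; lower temperature -> closer to one-hot.
    rng = rng or np.random.default_rng(0)
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / temperature
    y = y - y.max()                      # numerical stability
    return np.exp(y) / np.exp(y).sum()   # softmax over the perturbed logits

# Continuous message produced by a sender agent:
msg = np.array([-0.3, 0.8, 0.1, -1.2])
discrete_msg = st_binarize_forward(msg)            # array([0., 1., 1., 0.])
grad_to_sender = st_binarize_backward(np.array([0.5, -0.2, 0.1, 0.4]))

# Relaxed categorical message over three symbols:
probs = gumbel_softmax_sample(np.array([2.0, 0.5, -1.0]))  # sums to 1
```

In an actual training setup these operations would be implemented inside an autograd framework, where the straight-through trick is a custom backward function rather than a separate call.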
