ZHANG Yanxin, KONG Han, YIN Chenkun, WANG Zihao, HUANG Zhiqing. Distributed Multi-agent Soft Actor-Critic Algorithm With Probabilistic Prioritized Experience Replay[J]. Journal of Beijing University of Technology, 2023, 49(4): 459-466. DOI: 10.11936/bjutxb2022110019

    Distributed Multi-agent Soft Actor-Critic Algorithm With Probabilistic Prioritized Experience Replay


      Abstract: To meet the huge demand for interaction data in practical multi-agent tasks, a multi-agent soft Actor-Critic reinforcement learning algorithm combining probabilistic prioritized experience replay with a distributed architecture (DPER-MASAC) is proposed, building on distributed architectures from the single-agent domain. In DPER-MASAC, workers collect experience data by interacting with environments in parallel. To overcome the limitation that, in high-throughput multi-agent settings, only the most recent experiences are sampled with high probability, a more general priority-based probabilistic sampling scheme is introduced to select and reuse experience data, and the agents' network parameters are updated accordingly. To verify the efficiency of DPER-MASAC, comparative experiments were conducted in two predator-prey environments of gradually increasing difficulty, in which cooperation and competition coexist among multiple agents. Multi-agent soft Actor-Critic (MASAC) and multi-agent soft Actor-Critic with prioritized experience replay (PER-MASAC) served as the two baseline algorithms. The results indicate that the predator policy trained by DPER-MASAC performs best in terms of both final performance and task success rate.
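      The priority-based probabilistic sampling mentioned in the abstract can be sketched as follows. This is a minimal illustrative replay buffer, not the paper's exact DPER-MASAC implementation: the class name, the `alpha` exponent, the `eps` floor, and the ring-buffer eviction policy are assumptions borrowed from standard prioritized experience replay. The idea it illustrates is that transition i is drawn with probability proportional to p_i^alpha, so high-priority (e.g. high-TD-error) experiences are replayed more often, while every stored transition, recent or not, keeps a nonzero chance of being sampled.

```python
import random


class ProbabilisticReplayBuffer:
    """Minimal sketch of priority-proportional experience sampling.

    Transition i is sampled with probability
        P(i) = p_i**alpha / sum_j p_j**alpha,
    where p_i = |td_error_i| + eps. Hypothetical simplification,
    not the paper's exact DPER-MASAC buffer.
    """

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha      # 0 = uniform sampling, 1 = fully priority-proportional
        self.eps = eps          # keeps every priority strictly positive
        self.data = []
        self.priorities = []
        self.pos = 0            # next slot to overwrite once the buffer is full

    def add(self, transition, td_error=1.0):
        p = (abs(td_error) + self.eps) ** self.alpha
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(p)
        else:
            # ring buffer: overwrite the oldest transition
            self.data[self.pos] = transition
            self.priorities[self.pos] = p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # random.choices samples with replacement, weighted by priority
        idx = random.choices(range(len(self.data)),
                             weights=self.priorities, k=batch_size)
        return [self.data[i] for i in idx]
```

      Setting `alpha` between 0 and 1 interpolates between uniform replay and fully priority-proportional replay; the `eps` floor guarantees that even zero-error transitions remain reachable, which is what distinguishes this probabilistic scheme from always replaying only the most recent or highest-error experiences.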

       
