YIN Chenkun, JI Hongxuan, ZHANG Yanxin. Autonomous Decision-making of Searching and Rescue Robots Based on Off-policy Hierarchical Reinforcement Learning in a Complex Interactive Environment[J]. Journal of Beijing University of Technology, 2023, 49(4): 403-414. DOI: 10.11936/bjutxb2022090006


    Autonomous Decision-making of Searching and Rescue Robots Based on Off-policy Hierarchical Reinforcement Learning in a Complex Interactive Environment


      Abstract: The autonomous decision-making ability of robots in search and rescue tasks is of great significance for reducing the risk to human rescuers. To enable a robot to make decisions autonomously and plan reasonable paths when facing complex search and rescue tasks with multiple feasible solutions, an off-policy hierarchical reinforcement learning algorithm was designed. The algorithm consists of two layers of Soft Actor-Critic (SAC) agents: the high-level agent automatically generates the goals needed by the low-level agent and provides intrinsic rewards to guide the low-level agent, which interacts with the environment directly. Under the hierarchical reinforcement learning framework, the robot search and rescue task in a complex interactive environment was first formulated as a two-layer structure consisting of a high-level semi-Markov decision process and a low-level Markov decision process, and separate state spaces, action spaces, and reward functions were designed for each level. Then, to address the problem that the goals and reward functions in traditional reinforcement learning algorithms must be designed manually and lack generality, the SAC-based off-policy hierarchical reinforcement learning algorithm was applied to train a bipedal mobile robot to interact with the complex environment; the autonomous decision-making of the search and rescue robot was achieved through efficient use of data and adjustment of the goal space. Simulation results verify the effectiveness and generality of the proposed algorithm in solving complex multi-path search and rescue tasks.
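The control loop the abstract describes can be illustrated with a minimal sketch: a high-level agent proposes a goal at semi-Markov decision points (every few environment steps), while a low-level agent acts on the environment each step, conditioned on that goal and rewarded intrinsically for approaching it. The class and function names below are illustrative stand-ins, not the paper's implementation, and the placeholder policies take the place of trained SAC actor-critic networks.

```python
import random

class HighLevelAgent:
    """Proposes a goal for the low-level agent at each SMDP decision point."""
    def select_goal(self, state):
        # Stand-in for a SAC policy over the goal space:
        # propose a goal somewhere ahead of the current state.
        return state + random.uniform(0.5, 1.5)

class LowLevelAgent:
    """Interacts with the environment directly, conditioned on the goal (MDP)."""
    def select_action(self, state, goal):
        # Stand-in for a goal-conditioned SAC policy: step toward the goal.
        return 0.1 if goal > state else -0.1

def intrinsic_reward(state, goal):
    # The high level guides the low level by rewarding proximity to the goal.
    return -abs(goal - state)

def rollout(steps=30, horizon=5):
    high, low = HighLevelAgent(), LowLevelAgent()
    state, goal = 0.0, None
    trajectory = []
    for t in range(steps):
        if t % horizon == 0:          # high-level decision point (SMDP)
            goal = high.select_goal(state)
        action = low.select_action(state, goal)
        state += action               # toy 1-D environment transition
        trajectory.append((state, intrinsic_reward(state, goal)))
    return trajectory

traj = rollout()
```

In a full implementation, both levels would be trained off-policy from replay buffers, which is what allows the goal space to be adjusted and past experience reused efficiently, as the abstract notes.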

       
