Proof-of-Concept
We introduce the Control Admissibility Model (CAM), which approximates the invariant set for safety and accounts for the future consequences of actions.
Deep reinforcement learning in continuous domains focuses on learning control policies that map states to distributions over actions, ideally concentrating on the optimal choice at each step. In multi-agent navigation problems, the optimal actions depend heavily on the agents' density: the number of interaction patterns grows exponentially with density, making it hard for learning-based methods to generalize.
We propose to switch the learning objective from predicting optimal actions to predicting sets of admissible actions, which we call control admissibility models (CAMs). CAMs can be easily composed and used for online inference with an arbitrary number of agents. We design CAMs using graph neural networks and develop training methods that optimize them in the standard model-free setting, with the additional benefit of eliminating the reward engineering typically required to balance collision avoidance against goal-reaching. We evaluate the proposed approach in multi-agent navigation environments.
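As a rough illustration of the idea (a minimal sketch, not the authors' exact architecture), a CAM can be viewed as a small permutation-invariant graph network that scores whether a candidate action is admissible given the ego agent's local neighborhood. All names, dimensions, and the pooling choice below are assumptions for the sketch.

# Hedged sketch of a CAM: a permutation-invariant graph network that scores
# a candidate action for admissibility given the ego agent's neighbors.
# Architecture, dimensions, and names are illustrative assumptions, not the
# authors' exact model.
import torch
import torch.nn as nn

class CAM(nn.Module):
    def __init__(self, state_dim=4, action_dim=2, hidden=64):
        super().__init__()
        # Encode each (ego, neighbor) pair into a message vector.
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden))
        # Score a candidate action against the pooled neighborhood encoding.
        self.score_mlp = nn.Sequential(
            nn.Linear(hidden + state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, ego, neighbors, action):
        # ego: (state_dim,), neighbors: (k, state_dim), action: (action_dim,)
        ego_rep = ego.unsqueeze(0).expand(neighbors.shape[0], -1)
        msgs = self.edge_mlp(torch.cat([ego_rep, neighbors], dim=-1))
        pooled = msgs.max(dim=0).values  # permutation-invariant aggregation
        score = self.score_mlp(torch.cat([pooled, ego, action], dim=-1))
        return score.squeeze(-1)  # positive => action deemed admissible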
We show that CAMs can be trained in environments with only a few agents and easily composed for deployment in dense environments with hundreds of agents, achieving better performance than state-of-the-art methods.
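One way such composition could look at inference time (again a sketch under assumptions: the sampling scheme, noise scale, and zero threshold are ours, not the paper's procedure) is to sample candidate actions around a goal-directed reference and keep only those the CAM accepts against every neighbor.

# Hedged inference sketch: compose a CAM trained with few agents by requiring
# admissibility against each neighbor separately, then pick the admissible
# candidate closest to a goal-directed reference action. The sampling scheme
# and the zero threshold are assumptions for the sketch.
import torch

def choose_action(cam, ego, neighbors, goal_action, n_samples=64, noise=0.3):
    candidates = goal_action + noise * torch.randn(n_samples, goal_action.shape[0])
    best, best_dist = None, float("inf")
    for a in candidates:
        # Composition: one admissibility check per neighbor; all must pass.
        ok = all(cam(ego, nb.unsqueeze(0), a) > 0 for nb in neighbors)
        if ok:
            dist = torch.norm(a - goal_action)
            if dist < best_dist:
                best, best_dist = a, dist
    return best  # None signals that no sampled action was admissible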
By defining the observations as graphs, our method can be applied to higher-dimensional multi-agent navigation tasks with complex configuration spaces (a graph-construction sketch follows below).
Zero-shot transfer of the learned CAM to a new task, where each agent is required to chase another agent without collisions.
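To make the graph-observation idea above concrete, here is a minimal way one might build such a graph from raw agent states; the sensing radius and feature layout are assumptions of the sketch, not the paper's specification.

# Illustrative graph observation: agents are nodes, and edges connect pairs
# of agents within a sensing radius. Radius and feature layout are assumed.
import torch

def build_graph(states, radius=2.0):
    # states: (n, state_dim), with positions in the first two columns
    pos = states[:, :2]
    dists = torch.cdist(pos, pos)
    # Keep pairs within the radius, excluding self-edges (distance 0).
    src, dst = torch.nonzero((dists < radius) & (dists > 0), as_tuple=True)
    return states, torch.stack([src, dst])  # node features, edge index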
@inproceedings{yu2022learning,
  title={Learning Control Admissibility Models with Graph Neural Networks for Multi-Agent Navigation},
  author={Chenning Yu and Hongzhan Yu and Sicun Gao},
  booktitle={6th Annual Conference on Robot Learning},
  year={2022},
  url={https://openreview.net/forum?id=xC-68ANJeK_}
}