Project: #IITM-250601-162
Safe and Intelligent Multi-Robot Coordination via Reinforcement Learning
In recent years, multi-robot systems have garnered significant attention due to their potential to collaboratively accomplish complex tasks that are infeasible for a single robot. Applications range from industrial automation and environmental monitoring to disaster response and warehouse logistics. As these systems grow in scale and complexity, traditional rule-based control methods face challenges in adaptability, scalability, and safety—especially in dynamic or partially unknown environments. Reinforcement Learning (RL), with its capacity to learn optimal control strategies through interaction, offers a promising avenue to address these limitations by enabling autonomous and cooperative behavior in multi-robot settings.
This project explores the development of safe and intelligent cooperative control strategies for multi-robot systems using reinforcement learning. The research will be conducted on a physical testbed of up to six mobile robots, providing a practical platform for validating simulation results in real-world conditions. Our study focuses on key challenges at the intersection of AI and robotics, including (1) cooperative navigation in dynamic and obstacle-rich environments, (2) conflict-free path planning and collision avoidance, (3) optimal coverage of a target area, and (4) safe interaction in mixed autonomy scenarios where some robots are teleoperated while others operate autonomously.
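To make challenge (2) concrete, the following is a minimal sketch of one standard approach to conflict-free path planning: prioritized planning with a space-time reservation table, where robots plan in sequence and each later robot must route around the (cell, timestep) pairs already claimed. The function name, grid encoding (0 = free, 1 = obstacle), and horizon bound are illustrative assumptions, not part of the project's actual method, and the sketch ignores edge-swap conflicts for brevity.

```python
from collections import deque

def plan_conflict_free(grid, starts, goals):
    """Prioritized planning on a 4-connected grid (illustrative sketch).

    Robots plan one at a time; each finished path reserves its
    (cell, timestep) pairs so later robots must avoid them."""
    rows, cols = len(grid), len(grid[0])
    reserved = set()            # (cell, t) pairs already claimed
    horizon = rows * cols * 2   # crude upper bound on path length
    paths = []
    for start, goal in zip(starts, goals):
        # BFS over (cell, time) states; waiting in place is allowed.
        frontier = deque([(start, 0, [start])])
        seen = {(start, 0)}
        path = None
        while frontier:
            (r, c), t, hist = frontier.popleft()
            if (r, c) == goal:
                path = hist
                break
            if t >= horizon:
                continue
            for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                nxt = (nr, nc)
                if not (0 <= nr < rows and 0 <= nc < cols):
                    continue
                if grid[nr][nc] == 1:           # static obstacle
                    continue
                if (nxt, t + 1) in reserved:    # claimed by another robot
                    continue
                if (nxt, t + 1) in seen:
                    continue
                seen.add((nxt, t + 1))
                frontier.append((nxt, t + 1, hist + [nxt]))
        if path is None:
            raise RuntimeError("no conflict-free path found")
        for t, cell in enumerate(path):
            reserved.add((cell, t))
        for t in range(len(path), horizon + 1):
            reserved.add((path[-1], t))         # robot parks at its goal
        paths.append(path)
    return paths
```

In an RL setting, a planner like this typically serves as a baseline or as a safe fallback layer rather than as the learned policy itself.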
While recent advances have shown success in training individual robots using RL, applying RL to multi-agent systems introduces non-stationarity, sparse rewards, and coordination complexities that remain open research challenges. Furthermore, most existing work either overlooks safety constraints or handles them only as a post-processing step, which is inadequate for real-time operation in shared spaces. This project aims to bridge these gaps by integrating safe RL frameworks with multi-agent coordination techniques, enabling robots to learn decentralized policies that ensure both task performance and operational safety.
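One common way to enforce safety inside the learning loop rather than in post-processing is a safety shield: the RL policy's action is executed only if a short-horizon prediction keeps the robot outside every neighbor's safety radius, and is otherwise overridden by a safe fallback. The sketch below illustrates this idea under assumed values for the safety radius, control period, and a stop action as the fallback; none of these names or numbers come from the project itself.

```python
import math

SAFETY_RADIUS = 0.5   # metres; assumed minimum robot-to-robot separation
DT = 0.1              # assumed control period in seconds

def shield(own_pos, own_cmd, other_positions, other_cmds):
    """Safety shield (illustrative sketch): execute the policy's velocity
    command only if the one-step-ahead prediction keeps this robot outside
    every other robot's safety radius; otherwise override with a stop."""
    nx = own_pos[0] + own_cmd[0] * DT
    ny = own_pos[1] + own_cmd[1] * DT
    for (ox, oy), (ovx, ovy) in zip(other_positions, other_cmds):
        px, py = ox + ovx * DT, oy + ovy * DT   # neighbor's predicted position
        if math.hypot(nx - px, ny - py) < SAFETY_RADIUS:
            return (0.0, 0.0)                   # fall back to a safe stop
    return own_cmd
```

Because the shield runs during training as well as deployment, the policy never explores unsafe joint states, which is the key difference from filtering actions only after training.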
Research Objectives:
1. Design cooperative reinforcement learning algorithms that enable multiple robots to learn coordinated behaviors for coverage and navigation tasks.
2. Develop safety-aware training protocols that enforce real-time collision avoidance between robots and with static or dynamic obstacles.
3. Investigate shared autonomy frameworks, focusing on how teleoperated and autonomous robots can co-exist and interact safely in a common workspace.
4. Validate developed methods in a physical multi-robot testbed, measuring performance in terms of coverage efficiency, safety metrics, and adaptability to changing environments.
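The evaluation metrics named in objective 4 can be made precise with simple definitions such as the following, where the discretization of the target area, the sensor radius, and the function names are illustrative assumptions for this sketch.

```python
import math
from itertools import combinations

def coverage_ratio(robot_positions, area_cells, sensor_radius):
    """Fraction of grid cells in the target area within sensor range of at
    least one robot -- a simple proxy for coverage efficiency."""
    covered = sum(
        1 for cell in area_cells
        if any(math.dist(cell, p) <= sensor_radius for p in robot_positions)
    )
    return covered / len(area_cells)

def min_separation(robot_positions):
    """Smallest pairwise distance between robots; logged at every control
    step, the count of steps below the safety radius gives a safety metric."""
    return min(math.dist(a, b) for a, b in combinations(robot_positions, 2))
```

Tracking these quantities over repeated trials, with and without environmental changes, would support the adaptability comparison the objective calls for.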
By grounding reinforcement learning strategies in real-world robotic experimentation, this project contributes both to the theoretical understanding of safe multi-agent learning and to practical deployments of intelligent robotic teams in shared, dynamic environments.