Project: #143

Reinforcement learning control of unmanned aerial vehicles (UAVs) for cooperative payload transport

Campus: Geelong Waurn Ponds Campus
Available

One of the main objectives of the research project is to enable robust control of UAV networks for cooperative payload transport in the presence of time-varying uncertainties. The use of multiple UAVs to transport large payloads has applications in disaster relief operations in regions with limited connectivity, in warehouse automation, and in flexible manufacturing using robots for material handling.

One of the main challenges in controlling a multi-UAV network is its higher-order nonlinear dynamics, which are subject to uncertainties such as network latency and external disturbances. Such latencies and disturbances are common in robotic systems and can arise from time lost during sensing or communication between neighbouring agents, from local agent dynamics, or from the interaction between the agents and the working environment. Reinforcement learning (RL) control that intelligently exploits information about the delays and disturbances is proposed to maintain stability of the multi-UAV network during the cooperative payload transport task. RL is a promising approach that can achieve high success rates in collision-free navigation and precise trajectory tracking of UAVs, even under disturbances such as wind gusts.

While significant progress has been made on RL-based controllers for multi-UAV systems, little work addresses time-varying uncertainties during payload transport. The proposed project aims to investigate the effects of time-varying uncertainties on the stability of the cooperative transport task, and to develop RL control approaches that compensate for these uncertainties.
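To illustrate the kind of problem the project targets, the following is a minimal, hypothetical sketch (not the project's actual method): a one-dimensional double integrator stands in for a payload-carrying agent, actuation arrives a few steps late to model communication latency, a sinusoidal-plus-noise term models external disturbance, and a toy hill-climbing policy search stands in for a full RL algorithm. All names, gains, and dynamics here are illustrative assumptions.

```python
import math
import random

def rollout(gains, delay_steps=3, T=400, dt=0.02, seed=0):
    """Simulate a 1-D double integrator whose control input arrives
    `delay_steps` steps late (communication latency) and is corrupted
    by a time-varying disturbance. Returns the negative quadratic
    cost, so higher is better. Purely illustrative dynamics."""
    rng = random.Random(seed)
    k1, k2 = gains
    x, v = 1.0, 0.0                  # start 1 m from the setpoint, at rest
    queue = [0.0] * delay_steps      # FIFO buffer models the actuation delay
    cost = 0.0
    for t in range(T):
        u = -k1 * x - k2 * v         # linear state-feedback "policy"
        queue.append(u)
        u_applied = queue.pop(0)     # command computed delay_steps ago
        # sinusoidal gust plus sensor/process noise as the disturbance
        w = 0.5 * math.sin(0.05 * t) + 0.1 * rng.gauss(0.0, 1.0)
        v += (u_applied + w) * dt
        x += v * dt
        cost += (x * x + 0.1 * u * u) * dt
    return -cost

def hill_climb(episodes=200, step=0.2, seed=1):
    """Toy policy search: randomly perturb the feedback gains and keep
    improvements. A stand-in for an actual RL algorithm such as PPO."""
    rng = random.Random(seed)
    gains = [1.0, 1.0]
    best = rollout(gains)
    for _ in range(episodes):
        cand = [g + rng.uniform(-step, step) for g in gains]
        r = rollout(cand)
        if r > best:
            gains, best = cand, r
    return gains, best
```

Because the disturbance and the delay both enter between the commanded and the applied control, gains tuned without accounting for them can destabilise the loop; the search here only ever accepts gain updates that improve the delayed, disturbed rollout, which is the essence of training the controller against the uncertainty rather than around it.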