Learning to explore and map with EKF SLAM and RL

Autonomously exploring unknown environments and building maps is a crucial skill for mobile robots. Unlike many existing algorithms that focus on improving obstacle avoidance or on optimizing map coverage with discrete actions, the exploration algorithm presented here seeks to optimize both coverage and map accuracy. This novel trajectory-based EKF-SLAM algorithm combines two Reinforcement Learning (RL) agents with EKF-SLAM to achieve this goal. The RL agents are arranged in a hierarchical manner.

The first RL agent (high-level policy) overviews the general movement of the robot and ensures complete map coverage through the generation of sensing locations. The second RL agent (low-level policy) generates informative and smooth trajectories using a Bezier parameterization to reach these sensing locations. The trajectory-based EKF-SLAM algorithm is trained and tested in a 2D Python environment.
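As a rough illustration of the Bezier parameterization mentioned above, a low-level policy can output a handful of control points and have the trajectory evaluated with de Casteljau's algorithm. The sketch below is a minimal, generic Bezier evaluation, not the presented algorithm; the control-point values are purely illustrative.

```python
import numpy as np

def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] via de Casteljau's algorithm."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        # Repeated linear interpolation between successive control points.
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

# Hypothetical cubic Bezier from the robot's current pose to a sensing location.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
trajectory = [bezier_point(ctrl, t) for t in np.linspace(0.0, 1.0, 20)]
```

Because the curve starts at the first control point and ends at the last, the endpoints can be pinned to the robot pose and the target sensing location, while the interior points shape a smooth path between them.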

To join the presentation via Microsoft Teams, click here.