Autonomous mapping and navigation of a mobile robot using Reinforcement Learning

Mapping and navigation for autonomous mobile robots is a well-studied topic in which the robot typically has to perform either a navigational task (e.g. travelling from point A to point B) or a mapping task (e.g. generating an occupancy map of the environment).

In both cases, the goal is for the robot to traverse an unseen environment along a collision-free path. This raises several challenges, such as localization, limited movement capabilities, and obstacles in the environment.

This paper addresses these problems with a combined DDPG + RBPF SLAM approach, in order to determine to what extent this combination is capable of mapping an unseen, simulated 3D environment. The experiments were performed in two domestic environments consisting of multiple walls; the total open area was identical in both, but the walls were arranged differently. In both instances the algorithm was able to build the map, reaching a normalized map completeness of $38\%$ and $42\%$, respectively.
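To illustrate the exploration side of such an approach, the sketch below shows the core ingredients of a DDPG-style policy for a mobile robot: a deterministic actor mapping sensor state to a bounded velocity command, Gaussian exploration noise, and Polyak (soft) target-network updates. This is a minimal, self-contained illustration, not the paper's implementation; the state and action dimensions, the single linear layer standing in for the actor network, and all hyperparameters are assumptions.

```python
import numpy as np

# Hypothetical dimensions (assumptions, not taken from the paper):
STATE_DIM = 24   # e.g. downsampled laser-scan readings plus goal information
ACTION_DIM = 2   # e.g. linear and angular velocity commands

rng = np.random.default_rng(0)

def init_params(in_dim, out_dim):
    """Single linear layer as a stand-in for a full actor network."""
    return {"W": rng.normal(0.0, 0.1, (out_dim, in_dim)),
            "b": np.zeros(out_dim)}

def actor(params, state):
    """Deterministic policy: maps state to an action bounded in [-1, 1]."""
    return np.tanh(params["W"] @ state + params["b"])

def soft_update(target, online, tau=0.005):
    """DDPG-style Polyak averaging of the target-network parameters."""
    for k in target:
        target[k] = (1.0 - tau) * target[k] + tau * online[k]
    return target

online_params = init_params(STATE_DIM, ACTION_DIM)
target_params = init_params(STATE_DIM, ACTION_DIM)

state = rng.normal(size=STATE_DIM)
# Exploration: perturb the deterministic action with Gaussian noise, then clip.
noise = rng.normal(0.0, 0.1, ACTION_DIM)
action = np.clip(actor(online_params, state) + noise, -1.0, 1.0)

# After each critic/actor gradient step, the target network tracks slowly.
target_params = soft_update(target_params, online_params)
```

In a full system, the clipped action would be sent to the robot's velocity controller, while the RBPF SLAM module consumes the resulting odometry and scans to maintain the occupancy map used for the agent's reward.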