State representation learning using robotic priors in continuous action spaces for mobile robot navigation

Finished: 2020-08-28

MSc assignment

State representation learning can be used in robotics to increase the efficiency of learning a task while maintaining high performance.

One way of learning such a state representation is to make use of robotic priors. So far, learning a state representation with robotic priors has mainly been applied in fully observable, Markovian environments. In this thesis, the case of a partially observable environment will be considered. To make the state Markovian, a recurrent neural network will be used to map the history of partial observations onto a fully observable state.
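As a rough illustration of this idea, the sketch below (PyTorch, with hypothetical names) combines a recurrent encoder, which maps an observation history to a state estimate, with the robotic-prior losses commonly used in this line of work (temporal coherence, proportionality, causality, and repeatability, as formulated by Jonschkowski and Brock). The way transition pairs are formed and the action-similarity weighting used to cope with continuous actions are assumptions made for this sketch, not the method developed in the thesis.

```python
# Minimal sketch: recurrent encoder + robotic-prior losses (assumed setup, not the thesis's exact method).
import torch
import torch.nn as nn


class RecurrentStateEncoder(nn.Module):
    """Maps a sequence of observations o_1..o_T to state estimates s_1..s_T."""

    def __init__(self, obs_dim: int, state_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, state_dim)

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, T, obs_dim) -> states: (batch, T, state_dim)
        h, _ = self.gru(obs_seq)
        return self.head(h)


def robotic_prior_losses(states, actions, rewards):
    """Robotic-prior losses applied to consecutive state estimates.

    states:  (batch, T, state_dim)   encoder output
    actions: (batch, T-1, act_dim)   actions taken between steps
    rewards: (batch, T-1)            rewards received after each action
    """
    ds = states[:, 1:] - states[:, :-1]            # state changes Δs_t

    # Temporal coherence: states should change gradually over time.
    temporal = ds.pow(2).sum(-1).mean()

    # Pair each transition with another transition from the same sequence via a
    # random permutation, and weight each pair by action similarity. Exact action
    # matches do not occur in continuous action spaces, so this soft weighting is
    # an assumption made for this sketch.
    idx = torch.randperm(ds.shape[1])
    ds2 = ds[:, idx]
    a1, a2 = actions, actions[:, idx]
    r1, r2 = rewards, rewards[:, idx]
    s1, s2 = states[:, :-1], states[:, :-1][:, idx]
    w = torch.exp(-(a1 - a2).pow(2).sum(-1))       # similar-action weight

    # Proportionality: similar actions cause state changes of similar magnitude.
    proportionality = (w * (ds.norm(dim=-1) - ds2.norm(dim=-1)).pow(2)).mean()

    # Causality: similar actions leading to different rewards should start from
    # distant states; here the pair is weighted by the reward difference.
    reward_diff = (r1 - r2).abs()
    causality = (w * reward_diff * torch.exp(-(s1 - s2).pow(2).sum(-1))).mean()

    # Repeatability: similar actions in similar states cause similar state changes.
    repeatability = (w * torch.exp(-(s1 - s2).pow(2).sum(-1))
                       * (ds - ds2).pow(2).sum(-1)).mean()

    return temporal + proportionality + causality + repeatability
```

In practice, the individual priors would typically be given their own weighting coefficients, and the encoder would be trained on batches of recorded (observation, action, reward) sequences before or alongside learning the navigation policy.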

This setting will be used to explore the limits of the robotic priors proposed in the literature and, where possible, to propose extensions to these priors.