A possible solution for alleviating the health care worker shortage could be found in robotic Learning from Demonstration (LfD). In robotic LfD, a robot learns a new task from a few demonstrations by an expert. Current LfD frameworks are capable of learning low-level tasks, but it remains unclear how well these frameworks generalize.
This exploratory study investigates whether a human controller can be captured with non-reinforcement-learning LfD frameworks for a valve-closing task. A Gaussian Mixture Model (GMM) was deemed the most suitable framework for capturing a human controller. Its performance was evaluated by reproducing a P(D)-controller and an admittance controller combined with a mass-damper system. Additionally, the controller in the valve model was learned. The goal was to generalize over the initial conditions. Performance was evaluated by visual inspection and by the RMSE between the demonstration (ground truth) and the reproduction. The results show that GMM with Gaussian Mixture Regression (GMM/GMR) is capable of learning and reproducing the PD-controller, but with limited extrapolation capability. Extrapolation is limited by the component responsibilities and the spread of the learning data. In contrast, GMM/GMR was not able to capture the admittance model: the large spread of the demonstration data results in a non-linear learning space, which limits not only extrapolation but, in this case, also the learning itself.
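The GMM/GMR reproduction and RMSE evaluation described above can be sketched as follows. This is a minimal illustration, not the study's actual setup: the one-dimensional PD-controlled point mass, the gains, the initial conditions, and the helper names `pd_demo` and `gmr` are all assumptions made for the example. A joint GMM is fitted over (time, position) samples from several demonstrations, then conditioned on time (GMR) to reproduce the trajectory.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def pd_demo(x0, kp=4.0, kd=2.0, dt=0.01, steps=300):
    """Hypothetical demonstration: a PD controller drives a point mass to x = 0."""
    x, v = x0, 0.0
    xs = []
    for _ in range(steps):
        a = -kp * x - kd * v      # PD control law
        v += a * dt
        x += v * dt
        xs.append(x)
    return np.array(xs)

t = np.arange(300) * 0.01
demos = np.vstack([pd_demo(x0) for x0 in (0.5, 0.8, 1.0)])  # varied initial conditions

# Fit a GMM on the joint (t, x) samples of all demonstrations.
data = np.column_stack([np.tile(t, 3), demos.ravel()])
gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0).fit(data)

def gmr(gmm, t_query):
    """Condition the joint GMM on time to regress position (GMR)."""
    # Responsibility of each component at every query time.
    h = np.array([w * norm.pdf(t_query, m[0], np.sqrt(c[0, 0]))
                  for w, m, c in zip(gmm.weights_, gmm.means_, gmm.covariances_)])
    h /= h.sum(axis=0)
    out = np.zeros_like(t_query)
    for k, (m, c) in enumerate(zip(gmm.means_, gmm.covariances_)):
        # Conditional mean of x given t for component k.
        out += h[k] * (m[1] + c[1, 0] / c[0, 0] * (t_query - m[0]))
    return out

repro = gmr(gmm, t)
rmse = np.sqrt(np.mean((repro - demos.mean(axis=0)) ** 2))
```

The responsibilities `h` also illustrate the extrapolation limit noted above: outside the span of the demonstration data, no component has meaningful support, so the regression output is no longer reliable.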
In conclusion, GMM is not suitable for capturing a human controller.