Uncertainty-aware human pose estimation and vision-guided handover for collaborative aerial robots

Finished: 2023-09-01

MSc assignment

Collaborative aerial robots need to estimate the pose of human operators in their surroundings in order to perform handovers with humans in a safe and natural way. RGB-D sensors mounted on the robot, combined with deep learning techniques, can be used to perform human pose estimation.

I address two challenges at the intersection of these two problems. On the one hand, the robot needs access to an uncertainty estimate for the human pose detected by the vision pipeline, so that it can avoid following incorrect cues. On the other hand, deep learning models usually require powerful, heavy hardware, so the model must be optimized as much as possible to run in real time on the drone's limited onboard hardware.
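To illustrate what an uncertainty estimate from a pose pipeline could look like, the sketch below uses Monte Carlo dropout on a toy keypoint regressor: dropout is kept active at inference, several stochastic forward passes are averaged, and the per-keypoint spread serves as an uncertainty score. This is only a minimal illustration, not the method actually used in this project; the network, weights, keypoint count, and rejection threshold are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny keypoint head: 128-dim feature vector -> K keypoints in 3D.
K = 17  # COCO-style keypoint count (assumption)
W1 = rng.normal(0.0, 0.1, (64, 128))
W2 = rng.normal(0.0, 0.1, (K * 3, 64))

def forward(x, drop_p=0.2):
    """One stochastic forward pass with dropout kept active at inference."""
    h = np.maximum(W1 @ x, 0.0)           # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p   # Bernoulli dropout mask
    h = h * mask / (1.0 - drop_p)         # inverted-dropout scaling
    return (W2 @ h).reshape(K, 3)

def mc_dropout_pose(x, n_samples=30):
    """Mean pose over n_samples passes, plus a per-keypoint std as uncertainty."""
    samples = np.stack([forward(x) for _ in range(n_samples)])
    mean = samples.mean(axis=0)                 # (K, 3) pose estimate
    std = samples.std(axis=0).mean(axis=1)      # (K,) per-keypoint uncertainty
    return mean, std

x = rng.normal(size=128)                        # stand-in for an image feature
pose, sigma = mc_dropout_pose(x)

# Keypoints whose predictive std exceeds a (hypothetical) threshold can be
# discarded before the pose is passed to the handover controller.
reliable = sigma < 0.5
print(pose.shape, sigma.shape, int(reliable.sum()))
```

A controller consuming this output could, for instance, pause the handover motion whenever the wrist keypoint's uncertainty exceeds the threshold, rather than following an unreliable cue.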

The goal of my work is an uncertainty-aware, lightweight vision model that provides reliable input to an aerial robot controller.