For robotic co-workers, human-robot tool handovers are a necessary and fundamental skill. Humans make handovers seem seamless, yet a successful handover requires complex coordination, namely of the what, when, and where of the exchange. In the literature, the handover task consists of a pre-handover phase, in which communication is used to establish intent and set the handover location, followed by the physical handover, the collaborative effort to transfer the object load between the agents.
Most existing work on human-robot handovers relies on external motion-capture systems. To realize a handover as autonomously as possible, however, the robot should use its onboard sensing to navigate and coordinate with the human partner and ultimately achieve the task.
Within the AERIAL-CORE project, an Aerial Co-Working (ACW) system is being developed at the University of Twente in collaboration with multiple institutions. It aims to collaborate with and assist humans working at height on power lines. The ACW system is composed of multiple aerial robots with different functionalities, ranging from physical interaction to inspection and safety.
With that in mind, the goal of this thesis is to develop an onboard, vision-based human-robot handover system that autonomously collaborates with a human.
System Requirements:
- The system has to be lightweight and fast enough to run within the onboard control scheme.
- The system should work in a realistic environment.
- The handover should be safe and as comfortable as possible for the human (one possible way to realize this is sketched below).
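As a purely illustrative example of the last requirement, the sketch below scales the robot's approach speed with its distance to the human, a common human-aware motion strategy. The zone radii, speed limits, and function names are assumptions made for this sketch and are not taken from the thesis.

# Hedged sketch: limit the aerial robot's speed as it nears the human so the
# motion stays slow (and hence safer and more comfortable) inside the
# interaction zone. All numeric values are illustrative placeholders.
import numpy as np

INTERACTION_RADIUS = 1.2   # [m] assumed radius of the human interaction zone
STOP_RADIUS = 0.4          # [m] assumed minimum separation from the human
V_MAX = 1.0                # [m/s] assumed cruise speed outside the zone
V_MIN = 0.1                # [m/s] assumed creep speed near the handover point


def scaled_speed(robot_pos: np.ndarray, human_pos: np.ndarray) -> float:
    """Return a speed limit that decreases linearly inside the interaction zone."""
    d = float(np.linalg.norm(robot_pos - human_pos))
    if d <= STOP_RADIUS:
        return 0.0                      # too close: hold position
    if d >= INTERACTION_RADIUS:
        return V_MAX                    # far away: fly at cruise speed
    # Linear interpolation between V_MIN and V_MAX across the zone.
    alpha = (d - STOP_RADIUS) / (INTERACTION_RADIUS - STOP_RADIUS)
    return V_MIN + alpha * (V_MAX - V_MIN)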
The system is built starting from the body-pose estimation packages provided by the Artificial Intelligence and Information Analysis Lab at the Aristotle University of Thessaloniki.
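The exact interface of these packages is not reproduced here; the minimal sketch below only assumes that the estimator returns named 3D keypoints per frame, and shows how a candidate handover location could be derived from them. The keypoint names and the offset value are illustrative assumptions.

# Minimal sketch of how the handover pipeline could consume the body-pose
# estimator's output. The keypoint dictionary layout and names are assumed,
# used only to illustrate extracting a candidate handover target (the wrist).
from typing import Dict, Optional
import numpy as np


def handover_target(keypoints: Dict[str, np.ndarray]) -> Optional[np.ndarray]:
    """Pick the giving hand's wrist keypoint (3D, camera frame) as the target."""
    wrist = keypoints.get("right_wrist")
    if wrist is None:
        return None                      # human not (fully) detected this frame
    # Offset the target slightly towards the camera so the end-effector stops
    # in front of the hand rather than at it (offset value is illustrative).
    approach_offset = np.array([0.0, 0.0, -0.10])  # [m] along the camera z-axis
    return wrist + approach_offset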
Furthermore, the handover process is roughly divided into the following steps (a state-machine sketch of these steps follows the list):
(1) approach the human at a safe distance,
(2) wait for the initiation gesture,
(3) approach to just outside the interaction zone,
(4) wait for the start of the handover (second gesture or motion prediction),
(5) move the end-effector towards the handover location,
(6) execute the physical handover, and
(7) safely depart.
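The sketch below casts these seven steps as a simple finite-state machine. The state names mirror the list; the trigger predicates in the observation dictionary (gesture seen, zone reached, load transferred, and so on) are placeholder assumptions standing in for the perception and control modules.

# Minimal finite-state-machine sketch of the seven handover steps listed above.
from enum import Enum, auto


class HandoverState(Enum):
    APPROACH_SAFE = auto()      # (1) approach the human at a safe distance
    WAIT_INITIATION = auto()    # (2) wait for the initiation gesture
    APPROACH_ZONE = auto()      # (3) approach to just outside the interaction zone
    WAIT_START = auto()         # (4) wait for the second gesture / motion cue
    REACH = auto()              # (5) move the end-effector to the handover location
    TRANSFER = auto()           # (6) execute the physical handover
    DEPART = auto()             # (7) safely depart


def step(state: HandoverState, obs: dict) -> HandoverState:
    """Advance the handover FSM given one observation dictionary (assumed keys)."""
    if state is HandoverState.APPROACH_SAFE and obs.get("at_safe_distance"):
        return HandoverState.WAIT_INITIATION
    if state is HandoverState.WAIT_INITIATION and obs.get("initiation_gesture"):
        return HandoverState.APPROACH_ZONE
    if state is HandoverState.APPROACH_ZONE and obs.get("at_zone_boundary"):
        return HandoverState.WAIT_START
    if state is HandoverState.WAIT_START and obs.get("start_cue"):
        return HandoverState.REACH
    if state is HandoverState.REACH and obs.get("at_handover_location"):
        return HandoverState.TRANSFER
    if state is HandoverState.TRANSFER and obs.get("load_transferred"):
        return HandoverState.DEPART
    return state  # otherwise stay in the current state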
Throughout the process, vision-based body-pose estimation is used to understand human intent and to plan the robot's motion.
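As one hedged example of such intent understanding, the initiation gesture of step (2) could be detected directly from the estimated keypoints, for example by checking that a hand stays raised above the shoulder for a short window of frames. The 2D image-space keypoint names, the window length, and the raised-hand criterion are assumptions for this sketch, not the detector used in the thesis.

# Hedged sketch of a raised-hand initiation-gesture detector operating on
# 2D image-space body keypoints (assumed names and conventions).
from collections import deque
from typing import Dict
import numpy as np


class RaisedHandDetector:
    def __init__(self, window: int = 15):
        self.history = deque(maxlen=window)   # recent per-frame decisions

    def update(self, keypoints: Dict[str, np.ndarray]) -> bool:
        """Return True once the wrist has stayed above the shoulder for a full window."""
        wrist = keypoints.get("right_wrist")
        shoulder = keypoints.get("right_shoulder")
        raised = (
            wrist is not None
            and shoulder is not None
            and wrist[1] < shoulder[1]   # image y grows downwards: above = smaller y
        )
        self.history.append(raised)
        return len(self.history) == self.history.maxlen and all(self.history)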
Interesting References:
- P. Kratzer, M. Toussaint, and J. Mainprice, "Prediction of Human Full-Body Movements with Motion Optimization and Recurrent Neural Networks," 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 1792-1798, https://doi.org/10.1109/ICRA40945.2020.9197290.
- J. R. Medina et al., "A human-inspired controller for fluid human-robot handovers," 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), 2016.
- V. Ortenzi, A. Cosgun, T. Pardi, W. Chan, E. Croft, and D. Kulic, "Object Handovers: A Review for Robotics," IEEE Transactions on Robotics, 2021, https://doi.org/10.1109/TRO.2021.3075365.
Human-aware motion control in Aerial Human-Robot Handovers
Finished: 2022-02-17
MSc assignment