Human-robot interaction (HRI) faces challenges in achieving seamless and intuitive communication, particularly due to limitations of vision-based methods such as occlusions and privacy concerns. Surface electromyography (sEMG) provides a wearable alternative for intent recognition, but existing research predominantly focuses on gesture classification rather than continuous motion estimation.
This thesis proposes a novel muscle-driven control framework that uses dual-channel sEMG signals for real-time hand trajectory estimation and robotic following tasks. A Transformer-based deep learning model is trained to decode raw sEMG data into spatial hand positions, and a learned mapping then translates these positions directly into robot joint angles for low-latency control. Experimental validation demonstrates the system’s capability in dynamic interactions such as high-fives and object following, with performance benchmarked against a vision-based pose estimation system.
The results highlight the feasibility of sEMG-based continuous motion decoding for camera-free HRI, offering a privacy-preserving and physiologically grounded approach for assistive and collaborative robotics.