Vision-based Environmental Understanding for Motion Coordination

MSc assignment

This thesis explores how visual perception can enhance human motion prediction and improve coordination between the user and a robotic system. Unlike traditional approaches that rely solely on muscle signals, this project introduces a first-person (egocentric) camera that captures the surrounding environment from the user's point of view. Using pre-trained computer-vision models, the system will recognize objects, hand positions, and contextual cues, interpreting motion intentions directly from visual information. These visual features will then be combined with surface electromyography (sEMG) signals to predict arm and hand trajectories more accurately and naturally. The project aims to enable the robotic arm to better understand both the user's intention and the environment, leading to smoother, better-coordinated movements. Through this work, the student will learn to integrate vision-based environmental understanding with biological signals, contributing to the development of intelligent assistive and collaborative robotic systems.
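To make the fusion idea concrete, the sketch below shows one possible late-fusion baseline in PyTorch: a pooled vision feature vector (e.g., from a pre-trained backbone applied to the egocentric frame) is concatenated with a GRU encoding of a windowed sEMG signal to regress a short future wrist trajectory. All names, dimensions, and the fusion strategy here are illustrative assumptions for discussion, not the assignment's specified design; the actual architecture is part of the thesis work.

import torch
import torch.nn as nn


class FusionTrajectoryPredictor(nn.Module):
    """Hypothetical late-fusion model: vision features + sEMG -> trajectory."""

    def __init__(self, vision_dim=512, semg_channels=8, semg_window=200,
                 hidden_dim=128, horizon=10):
        super().__init__()
        # Encode the sEMG window (samples x channels) with a small GRU.
        self.semg_encoder = nn.GRU(input_size=semg_channels,
                                   hidden_size=hidden_dim, batch_first=True)
        # Project the (assumed pre-extracted) vision feature vector.
        self.vision_proj = nn.Linear(vision_dim, hidden_dim)
        # Fuse by concatenation and regress `horizon` future 3-D wrist positions.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, horizon * 3),
        )
        self.horizon = horizon

    def forward(self, vision_feat, semg):
        # vision_feat: (batch, vision_dim), pooled features from a pre-trained
        # backbone run on the egocentric camera frame (assumed given here).
        # semg: (batch, semg_window, semg_channels), filtered sEMG samples.
        _, h = self.semg_encoder(semg)           # h: (1, batch, hidden_dim)
        fused = torch.cat([self.vision_proj(vision_feat), h[-1]], dim=-1)
        out = self.head(fused)                   # (batch, horizon * 3)
        return out.view(-1, self.horizon, 3)     # future (x, y, z) positions


if __name__ == "__main__":
    model = FusionTrajectoryPredictor()
    vision_feat = torch.randn(4, 512)            # stand-in vision features
    semg = torch.randn(4, 200, 8)                # stand-in sEMG windows
    traj = model(vision_feat, semg)
    print(traj.shape)                            # torch.Size([4, 10, 3])

A simple concatenation baseline like this is only a starting point; exploring when the visual context should dominate (e.g., near graspable objects) versus when sEMG should, is exactly the kind of question the assignment leaves open.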