Context: In a world where the impact of travel is becoming ever more problematic, robotic avatar systems will be key to enabling long-distance social connectedness, and to applying one's skills and knowledge in hard-to-reach or dangerous environments, all without the need for special training or the cognitive overload of actively controlling a remote robot. Avatars will enable the user to feel present at, and interact with, a remote environment and the people in it from a distance, as if physically present. Such systems can find application in a broad range of scenarios: from disaster response and maintenance in places that are hard to reach, to care at home and visiting a loved one on the other side of the world. Typically, avatar systems consist of three parts: a robotic avatar located in a remote environment, a control pod from which the operator controls the avatar, and a layer of VR and control algorithms that connects the pod and the avatar. To create an immersive experience, the system (a) isolates the operator from their current environment, (b) provides realistic artificial stimuli based on measurements of the remote environment, and (c) provides intuitive control interfaces for the operator to control the avatar in the remote location.
Challenge: One of the main challenges in the development of avatar systems is dealing with the inevitable delays in long-distance communication channels. Delays affect information transfer both from the operator to the avatar and from the avatar to the operator: the avatar does not know the operator's current intentions, and the operator does not know the current state of the avatar's environment. Traditional approaches to overcoming the effects of delay, such as slowing down motion or waiting for confirmation before each action, are overly restrictive and do not allow for effective operation.
Concept: We address the above-described challenges by introducing three loops, each with a degree of autonomy: one where the avatar has a degree of autonomy to interact with the world (by predicting what the operator will do), one where the operator gets immediate feedback on the state of the world (from a prediction of the actual state of the world), and one that integrates both and minimizes mismatches between them. Through these loops, both avatar and operator are provided with model-based, undelayed information, while the models are continuously updated over the delayed communication line. Eventually, these loops will operate on a broad range of modalities: audio, vision, smell, touch and haptics, temperature, and more. For this project, the goal is to integrate the visual and haptic modalities, building on recent developments in VR technology and model-mediated control.
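The three-loop structure can be illustrated with a minimal simulation. The sketch below is an illustrative toy model, not the project's actual architecture: it uses a 1-D position as the "world state", a constant-delay channel, and hand-picked gains. The avatar acts on the last received command (loop 1), the operator's local model provides undelayed feedback (loop 2), and delayed measurements are compared against the model's own past states to bleed away mismatches (loop 3).

```python
from collections import deque

class DelayedChannel:
    """FIFO link that delivers a value after `delay` simulation ticks
    (assumption: constant, known delay)."""
    def __init__(self, delay):
        self.delay = delay
        self.queue = deque()

    def send(self, tick, value):
        self.queue.append((tick + self.delay, value))

    def receive(self, tick):
        # Return the most recent value that has arrived by `tick`, or None.
        delivered = None
        while self.queue and self.queue[0][0] <= tick:
            delivered = self.queue.popleft()[1]
        return delivered

def simulate(delay=5, steps=30):
    up = DelayedChannel(delay)    # operator -> avatar (commands)
    down = DelayedChannel(delay)  # avatar -> operator (measurements)

    env_pos = 0.0    # true remote state (1-D position, illustrative)
    model_pos = 0.0  # operator-side model: undelayed feedback (loop 2)
    last_cmd = 0.0   # avatar-side estimate of operator intent (loop 1)

    # Past model states, so delayed measurements are compared like-with-like
    # (a measurement arriving now reflects a command sent one round trip ago).
    history = deque(maxlen=2 * delay + 1)

    for t in range(steps):
        cmd = 0.1                 # operator commands a constant velocity
        model_pos += cmd          # loop 2: immediate model-based feedback
        up.send(t, cmd)
        history.append(model_pos)

        received = up.receive(t)
        if received is not None:
            last_cmd = received   # loop 1: avatar tracks the latest intent
        env_pos += last_cmd       # avatar acts despite the delayed commands
        down.send(t, env_pos)

        meas = down.receive(t)
        if meas is not None and len(history) == history.maxlen:
            # Loop 3: correct the model with the mismatch between the delayed
            # measurement and the model's state one round trip ago.
            model_pos += 0.5 * (meas - history[0])

    return env_pos, model_pos
```

In this idealized run the model and the environment integrate the same commands, so the loop-3 mismatch stays at zero and the operator's model simply leads the (lagging) environment; with disturbances or model error, the delayed measurements would pull the model back toward the true state.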
The work at RaM focuses on environment modeling and avatar control with haptic feedback.