SAMMI

Software Architectures for Multi-Modal Interaction

Multi-modal tele-robotics allows meaningful sensory experiences to be conveyed from a distance. The goal is to enable complex interactions in a remote environment. Use cases vary, but often relate to explosive ordnance disposal (EOD), search and rescue, or tele-medicine.

In our case, a modality is defined as “a channel of input/output with regards to human-robot interaction” [1]. Multi-modal systems therefore use several of these channels to transmit different kinds of interactions. Commonly used modalities are tactile, kinaesthetic, auditory, and visual cues.

The complexity of multi-modal systems lies in the limited resources (CPU, RAM, network) of the control PC, which are often at odds with optimal robot control or human perception. A good example of a multi-modal system can be found in the i-Botics AVATAR system.
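
To make this resource trade-off concrete, the following sketch samples the three resource types named above. It is a minimal illustration, assuming the psutil library is available; the budget values are hypothetical, not tuned figures from any real system.

    import psutil

    # Hypothetical budgets for a tele-robotics control PC (illustrative only).
    CPU_BUDGET_PERCENT = 80.0
    RAM_BUDGET_PERCENT = 75.0

    def sample_resources() -> dict:
        """Take one snapshot of CPU, RAM and network usage."""
        net = psutil.net_io_counters()
        return {
            "cpu_percent": psutil.cpu_percent(interval=0.1),  # blocks 100 ms to measure
            "ram_percent": psutil.virtual_memory().percent,
            "net_bytes_sent": net.bytes_sent,
            "net_bytes_recv": net.bytes_recv,
        }

    if __name__ == "__main__":
        usage = sample_resources()
        overloaded = (usage["cpu_percent"] > CPU_BUDGET_PERCENT
                      or usage["ram_percent"] > RAM_BUDGET_PERCENT)
        print(usage, "overloaded:", overloaded)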

In the SAMMI project, the focus is on handling the modality requirements imposed by the different subsystems by actively managing them at runtime. This is done using a systems and software engineering approach, so that generalisable solutions are created and existing systems can be optimised for both human perception and control purposes.
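
As an illustration of what such runtime management could look like, the sketch below shows a hypothetical manager that degrades the least important modalities first when a shared network budget is exceeded. All class names, priorities, and numbers are illustrative assumptions, not an existing SAMMI API.

    from dataclasses import dataclass

    @dataclass
    class Modality:
        name: str
        priority: int              # lower number = more important
        bandwidth_kbps: float      # requested network load
        min_bandwidth_kbps: float  # lowest acceptable quality

    def allocate(modalities: list[Modality], budget_kbps: float) -> dict[str, float]:
        """Grant each modality its request if possible; otherwise degrade
        the least important modalities towards their minimum first."""
        grants = {m.name: m.bandwidth_kbps for m in modalities}
        excess = sum(grants.values()) - budget_kbps
        # Degrade from lowest priority (highest number) upwards.
        for m in sorted(modalities, key=lambda m: m.priority, reverse=True):
            if excess <= 0:
                break
            reducible = grants[m.name] - m.min_bandwidth_kbps
            cut = min(reducible, excess)
            grants[m.name] -= cut
            excess -= cut
        return grants

    channels = [
        Modality("video", priority=0, bandwidth_kbps=4000, min_bandwidth_kbps=1500),
        Modality("kinaesthetic", priority=1, bandwidth_kbps=800, min_bandwidth_kbps=800),
        Modality("audio", priority=2, bandwidth_kbps=256, min_bandwidth_kbps=64),
    ]
    print(allocate(channels, budget_kbps=4500))

In this example the audio channel is reduced to its minimum before the video channel is touched, while the kinaesthetic channel is never degraded; which modality may be degraded, and by how much, is exactly the kind of decision the project aims to generalise.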

Goal of the project

The project aims to define a set of measures that capture both the subjective and objective components of a tele-robotics system. When the project is finished, a method should exist for handling resources dynamically at runtime that can easily be added to any component-based tele-robotics setup.
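
The sketch below shows one possible way to group such measures in code. The concrete quantities and scales are illustrative assumptions; defining the actual set of measures is part of the project.

    from dataclasses import dataclass

    @dataclass
    class ObjectiveMeasures:
        # Examples of quantities a running system can log directly.
        end_to_end_latency_ms: float
        latency_jitter_ms: float
        video_frame_rate_hz: float
        haptic_update_rate_hz: float

    @dataclass
    class SubjectiveMeasures:
        # Examples of operator-reported scores, e.g. from questionnaires.
        perceived_presence: float  # e.g. 1 (none) .. 7 (full presence)
        task_load: float           # e.g. 0 (low) .. 100 (high)

    @dataclass
    class SessionReport:
        objective: ObjectiveMeasures
        subjective: SubjectiveMeasures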

Assignment areas

Assignments are mainly focused on the following areas:

  • System Engineering
  • Human-Robot Interaction
  • Software Development
    • ROS and (soft) Real-Time environments (see the sketch after this list)
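
For the ROS and (soft) real-time area specifically, the sketch below shows what a soft real-time check can look like, assuming ROS 2 with the rclpy client library; the node name, tick period, and jitter tolerance are illustrative choices.

    import time
    import rclpy
    from rclpy.node import Node

    PERIOD_S = 0.01  # 100 Hz control tick; a soft real-time deadline

    class SoftRealtimeTick(Node):
        """Logs when a control tick misses its period instead of failing hard:
        the defining behaviour of a *soft* real-time loop."""

        def __init__(self):
            super().__init__('soft_realtime_tick')
            self._last = time.monotonic()
            self.create_timer(PERIOD_S, self._tick)

        def _tick(self):
            now = time.monotonic()
            elapsed = now - self._last
            self._last = now
            if elapsed > 1.5 * PERIOD_S:  # tolerate some jitter before warning
                self.get_logger().warning(f'deadline overrun: {elapsed * 1000:.1f} ms')

    def main():
        rclpy.init()
        rclpy.spin(SoftRealtimeTick())
        rclpy.shutdown()

    if __name__ == '__main__':
        main()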


[1] “Modality (human-computer interaction),” 02 04 2024. [Online]. Available: https://en.wikipedia.org/wiki/Modality_(human%E2%80%93computer_interaction).
