Social robotics is a field of robotics in which robots interact and communicate with physical agents by following social cues and norms. Such robots are designed with human-robot interaction in mind and exhibit varying levels of autonomy to achieve human-like behaviour, thereby improving the robot's perception and acceptance. At present, this behaviour is typically achieved using the HMMM (Heterogeneous Multi-Modal Mixing) toolkit; however, that toolkit remains very complex and is therefore not suited for fast prototyping.
The goal of this project is to create a toolkit that permits easy and rapid prototyping of a social robot. Designed for students and teachers, the toolkit comprises various off-the-shelf building blocks for computer vision, speech recognition and visual display (to show emotions and visual expressions). By selecting the appropriate building blocks and combining them with the already existing actuator toolkit, an initial prototype can be built in a time- and cost-effective manner. This bachelor thesis is partly derived from the work of Suhaib Aslam, who created a toolkit that enables adults with autism to co-design their own social robot.
Research will be conducted to assess the current state-of-the-art components in each field and to determine their strengths and weaknesses. Furthermore, the Audeme Movi speech recognition module, the OpenMV M7 camera and a 16x2 LCD screen will be tested more thoroughly due to their potential usefulness in future projects. Several example scripts, such as one for a pan-tilt face-tracking device, will be written and added to the toolkit. Finally, the performance of such a minimal social robot will be evaluated and compared to existing platforms that use HMMM, such as the EyePi.
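To illustrate the kind of example script envisioned, the pan-tilt face-tracking logic can be sketched as a simple proportional controller: the offset of the detected face from the image centre is turned into small servo-angle corrections. The sketch below is hardware-independent; the frame size, gains, angle limits and function names are illustrative assumptions, not part of any existing toolkit.

```python
# Minimal pan-tilt tracking sketch: given the bounding box of a detected
# face and the frame size, nudge the pan/tilt angles so the face moves
# toward the image centre. All constants here are illustrative assumptions.

FRAME_W, FRAME_H = 160, 120     # QQVGA, a common resolution on small cameras
PAN_GAIN, TILT_GAIN = 0.1, 0.1  # proportional gains (degrees per pixel of error)
ANGLE_MIN, ANGLE_MAX = -90, 90  # mechanical limits of a typical hobby servo


def clamp(value, lo, hi):
    """Keep a servo angle within its mechanical limits."""
    return max(lo, min(hi, value))


def update_angles(pan, tilt, face_box):
    """Return new (pan, tilt) angles given a face bounding box (x, y, w, h)."""
    x, y, w, h = face_box
    # Error = offset of the face centre from the image centre, in pixels.
    err_x = (x + w / 2) - FRAME_W / 2
    err_y = (y + h / 2) - FRAME_H / 2
    # Proportional control: turn toward the face, clamped to the servo range.
    # The sign convention depends on how the servos are mounted.
    pan = clamp(pan - PAN_GAIN * err_x, ANGLE_MIN, ANGLE_MAX)
    tilt = clamp(tilt + TILT_GAIN * err_y, ANGLE_MIN, ANGLE_MAX)
    return pan, tilt


if __name__ == "__main__":
    # Face exactly in the centre: angles stay unchanged.
    print(update_angles(0, 0, (70, 50, 20, 20)))   # -> (0.0, 0.0)
    # Face to the right of centre: pan turns toward it.
    print(update_angles(0, 0, (120, 50, 20, 20)))  # -> (-5.0, 0.0)
```

On the actual robot, the loop body would call the camera's face detector each frame (e.g. a Haar-cascade search on the OpenMV M7) and write the returned angles to the pan and tilt servos.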