Spoken face-to-face communication is likely to be the most important means of interacting with robots in the future. In addition to speech technology, this requires the use of visual information in the form of facial expressions, lip movements and gaze. Human-robot interaction is also naturally situated: the setting in which the interaction takes place matters. In such settings, there might be several speakers involved (multi-party interaction), and there might be objects in the shared space that can be referred to. Recent years have seen an increased interest in modelling such communication for human-robot interaction.
In this tutorial, we will use the Furhat social robot platform as a tool to explore human-robot interaction modelling. The tutorial will start with a hands-on session to get acquainted with the Furhat platform and to show how different interaction patterns can be implemented. In the afternoon session, we will give the theoretical background of spoken face-to-face interaction and discuss how it applies to human-robot interaction. We will then survey the state of the art of the technologies needed and how this kind of interaction can be modelled. We will finish the tutorial with hands-on exercises on programming human-robot interaction for a social robot using the Furhat platform.
The tutorial is organised as a series of presentations, demos and follow-along examples. The explicit goal of the tutorial is for every participant to create their own example interaction based on their interests. We expect participants to engage in a collaborative spirit.
The tutorial is limited to a maximum of 20 participants.
Part 1: 14:15–16:30
Part 2: 17:15–19:15