Tutorials

"Conversational interaction with social robots" by Furhat Robotics


Description: Spoken face-to-face communication is likely to be the most important means of interaction with robots in the future. In addition to speech technology, this also requires the use of visual information in the form of facial expressions, lip movement and gaze. Human-robot interaction is also naturally situated, which means that the situation in which the interaction takes place is of importance. In such settings, there might be several speakers involved (multi-party interaction), and there might be objects in the shared space that can be referred to. Recent years have seen an increased interest in modelling such communication for human-robot interaction.

In this tutorial, we will use the Furhat social robot platform as a tool to explore human-robot interaction modelling. The tutorial will start with a hands-on session to get acquainted with the Furhat platform and show how different interaction patterns can be implemented. In the afternoon session, we will give the theoretical background of spoken face-to-face interaction and how this applies to human-robot interaction. We will then go through the state-of-the-art of the different technologies needed and how this kind of interaction can be modelled. We will finish the tutorial with hands-on exercises on how to program human-robot interaction for a social robot using the Furhat platform.

The tutorial is organised as a series of presentations, demos, and follow-along examples. The explicit goal of the tutorial is for every participant to create their own example interaction based on their interests. We expect participants to take part in a collaborative spirit.

The tutorial is limited to a maximum of 20 participants.

Schedule:

Part 1: 13:15-16:45

  • Principles of social robotics and application specifics
  • Voice interactions
  • Situated interactions
  • Demo of design patterns in social robotics applications
  • Set up the Furhat SDK on participants' computers
  • Follow along with the tutorial of creating your first skill
  • Sketch your own interaction using the visual prototyping tool
Part 2: 17:00-20:15

  • Recap of Part 1
  • A closer look at the State-chart framework and NLU engine of the Furhat Platform
  • Creating your own interaction
It is possible to participate in only one of the two sessions. If you decide to join only the second part, you are required to get the SDK running, set up your developer environment, and get acquainted with the Furhat Platform beforehand by following the guide at docs.furhat.io.
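Since the afternoon session takes a closer look at the state-chart framework and NLU engine, the underlying idea can be previewed with a minimal, framework-free Kotlin sketch. Note that this is an illustration of the state-chart pattern only: the class names, the `log` stand-in for robot speech, and the keyword-matching "NLU" are all invented for this example and are not the Furhat SDK API.

```kotlin
// Framework-free sketch of a state-chart dialogue: each state speaks a
// prompt on entry and maps user responses to a follow-up state.
// All names are illustrative, not the Furhat SDK.

class DialogueState(
    val name: String,
    val prompt: String,                                // spoken on entry
    val transitions: Map<String, String> = emptyMap()  // response -> next state
)

class Flow(private val states: Map<String, DialogueState>, start: String) {
    val log = mutableListOf<String>()                  // stand-in for robot speech
    var current: DialogueState = states.getValue(start)
        private set

    init { enter(current) }

    private fun enter(state: DialogueState) {
        current = state
        log += state.prompt                            // stand-in for "say(...)"
    }

    // Toy "NLU": exact keyword matching stands in for intent classification.
    fun onResponse(utterance: String) {
        val next = current.transitions[utterance.trim().lowercase()]
        if (next != null) enter(states.getValue(next))
        else log += "Sorry, I didn't catch that."
    }
}

fun buildGreetingFlow() = Flow(
    states = mapOf(
        "Greeting" to DialogueState(
            "Greeting", "Hi there! Do you want to hear a fun fact?",
            transitions = mapOf("yes" to "FunFact", "no" to "Goodbye")
        ),
        "FunFact" to DialogueState(
            "FunFact", "My face is animated with back-projection."
        ),
        "Goodbye" to DialogueState("Goodbye", "Okay, see you later!")
    ),
    start = "Greeting"
)
```

In the real platform, states and transitions are expressed in a Kotlin DSL and responses are classified by the NLU engine rather than matched verbatim, but the control flow follows the same shape.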
Prerequisites:

  • The tutorial requires no background in design, sketching or prototyping
  • Basic experience with object-oriented programming is preferred but not required; any experience with Kotlin will be of great help
  • Experience with the Furhat platform is not required
Target audience:

The tutorial is intended for researchers interested in getting hands-on experience designing and implementing a social robot application.

We anticipate that those who will benefit most include: engineers developing conversational interactions for use in upcoming studies; social scientists working with engineers who design robot interactions; and researchers designing experiments with a situated conversational agent.

Speakers:

  • Gabriel Skantze, Professor in Speech Technology at KTH Royal Institute of Technology in Stockholm and Co-founder and Chief Scientist of Furhat Robotics
  • Nils Hagberg, Product Owner of FurhatX for Research and Innovation
Preparation:

  • Request access to the Furhat SDK at furhat.io (available for Unix, Mac and Windows)
  • Install the JetBrains IntelliJ IDEA (Community Edition)
Date and time:

Thursday, October 29, 2020, 13:15-20:15 (CET/UTC+1)


    ICMI 2020 ACM International Conference on Multimodal Interaction. Copyright © 2019-2020