The ICMI workshop program aims to provide researchers with a more informal, discussion-oriented forum for exploring emerging topics in multimodal interaction or revisiting established research areas from a new angle. This year we received a record number of high-quality workshop submissions and selected the following five workshops, to be held on the last day of the conference:

  1. Multimodal, Multi-Party, Real-World Human-Robot Interaction (program available)
  2. Understanding and modeling multiparty, multimodal interactions (program available)
  3. Roadmapping the Future of Multimodal Interaction Research including Business Opportunities and Challenges (program available)
  4. 2nd International Workshop on “Emotion representations and modelling in Human-Computer Interaction systems” (ERM4HCI 2014) (program available)
  5. The 7th Workshop on Eye Gaze in Intelligent Human Machine Interaction: Eye-Gaze and Multimodality (program available)

The first three workshops are new additions to ICMI, while the last two are follow-ups to previously held workshops. A Workshop on Smart Material Interfaces was also planned, but was later cancelled.

ICMI 2014 Workshop Chairs

Alexandros Potamianos (National Technical University of Athens, Greece)
Carlos Busso (The University of Texas at Dallas, USA)

Multimodal, Multi-Party, Real-World Human-Robot Interaction

Organizers: Mary Ellen Foster, Manuel Giuliani, and Ron Petrick

The development of robots capable of interacting with humans has made tremendous progress in the last decade, leading to the expectation that robots will increasingly be deployed in public spaces in the near future, for example as receptionists, shop assistants, waiters, or bartenders. In these scenarios, robots must handle human-robot interactions that are short and dynamic, often involving multiple people at once. To support this form of interaction, robots typically require specific skills, including robust video and audio processing, fast reasoning and decision-making mechanisms, and natural and safe output path planning algorithms. This physically embodied, dynamic, real-world context is the most challenging possible domain for multimodal interaction: for example, the state of the physical environment may change at any time; the input sensors must cope with noisy and uncertain input; and the robot platform must combine interactive social behaviour with physical task-based actions such as moving and grasping. This workshop aims to bring together researchers from a range of relevant disciplines to explore the challenges of, and solutions for, multimodal interaction in this area from different perspectives.


Please visit our website for more information.

Understanding and modeling multiparty, multimodal interactions

Organizers: Samer Al Moubayed, Dan Bohus, Anna Esposito, Dirk Heylen, Maria Koutsombogera, Harris Papageorgiou, Gabriel Skantze

Analysis and understanding of human-human conversations has stressed the importance of modeling all available verbal and non-verbal signals occurring in conversations, in order to develop human-machine interfaces capable of both interpreting the multimodal signals in human conversations and generating natural, synchronized responses. While research has focused primarily on dyadic interactions, multiparty interactions, i.e. communicative setups involving more than two participants, form a complex and challenging setting that merits attention. Understanding and modelling the multiparty configuration and the underlying affective and social behavior of the participants informs the design of interfaces that can (a) follow and participate in the conversation, (b) exhibit interactional skills to control the interaction flow, (c) respond with appropriate timing and as naturally as possible, (d) keep track of the multimodal conversation of the participants, and (e) maintain a high and balanced level of involvement among them. This workshop aims to explore the growing area of multiparty multimodal interaction by bridging this multidisciplinary field and bringing together researchers from the domains of dialog systems, multimodal conversation analysis, multimodal user interfaces, and multimodal signal processing.


Please visit our website for more information.

Roadmapping the Future of Multimodal Interaction Research including Business Opportunities and Challenges

Organizers: Dirk Heylen, Alessandro Vinciarelli

The workshop "Roadmapping the Future of Multimodal Interaction Research, and Business Opportunities and Challenges" invites papers from researchers, industry practitioners, policy makers, and other visionaries to identify the state of the art and the future of research on multimodal interaction and related fields such as affective computing and social signal processing. Besides attention to research questions and challenges (including reflection on the shortcomings of current methods and proposals for innovation on this topic), the workshop focuses on societal challenges and business opportunities: what can the impact of this research be, and what needs to be done to achieve it? The workshop is also a place to discuss current best practices.

Please visit our website for more information.

2nd International Workshop on “Emotion representations and modelling in Human-Computer Interaction systems” (ERM4HCI 2014)

Organizers: Kim Hartmann, Björn Schuller, Ronald Böck, Klaus R. Scherer

In the development of user-adaptable Human-Computer Interaction (HCI), the role of emotions occurring during interaction has gained attention over the past years. Emotions, widely accepted as essential to human-human interaction, have become increasingly interesting to designers of affective interfaces seeking to provide natural, user-centred interaction. However, to adequately incorporate emotions in modern HCI systems, results from different research disciplines must be combined. The 2nd ERM4HCI workshop concentrates on emotion representations: the characteristics used to describe and identify emotions, and their relation to personality and user-state models (covering, e.g., age, gender, and physical/cognitive load). Researchers are encouraged to discuss possible interdependencies of characteristics at both the intra- and inter-modality level. Interdependencies of characteristics may occur when two characteristics are influenced by the same physiological change in the observed user, but other factors (technical, constructive, etc.) can cause interdependencies as well. The workshop aims to identify a minimal set of characteristics for representing and recognising emotions in multimodal affective HCI. It addresses some of the typical issues arising in multimodal data processing for affective systems, such as timing aspects, confidence metrics, discretisation issues, and issues related to the translation between different emotion models.


Please visit our website for more information.

7th Workshop on Eye Gaze in Intelligent Human Machine Interaction: Eye-Gaze and Multimodality

Organizers: Hung-Hsuan Huang, Roman Bednarik, Kristiina Jokinen, Yukiko Nakano

This is the seventh meeting in the series of workshops on Eye Gaze in Intelligent Human Machine Interaction. Previous workshops have discussed a wide range of issues concerning eye gaze: technologies for sensing human attentional behaviors, the roles of attentional behaviors as social gaze in human-human and human-humanoid interaction, attentional behaviors in problem-solving and task-performing, gaze-based intelligent user interfaces, evaluation of gaze-based UIs, and eye gaze in multimodal interpretation and generation. In addition to these topics, this year’s workshop especially welcomes contributions on real-world applications, on the integration of knowledge from the humanities, and on technologies for mobile platforms, where remarkable progress has been achieved in recent years. This workshop aims to continue exploring this growing area of research by bringing together researchers from fields including human sensing, multimodal processing, humanoid interfaces, intelligent user interfaces, and communication science.


Please visit our website for more information.

ICMI 2014 ACM International Conference on Multimodal Interaction. 12-16th November 2014, Istanbul, Turkey. Copyright © 2010-2024