Call for Papers

ICMI (the International Conference on Multimodal Interaction) is the premier international forum for multidisciplinary research on multimodal interaction and multimodal interfaces. The conference focuses on theoretical and empirical foundations, component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and integrative, multimodal system development.

ICMI'2015 will take place between November 9th and 13th at the Motif Hotel in Seattle (USA). The main conference is single-track and includes: keynote speakers, technical full and short papers (including oral and poster presentations), special sessions, demonstrations, exhibits and doctoral spotlight papers. The conference will also feature workshops and grand challenges. The proceedings of ICMI'2015 will be published by ACM as part of its series of International Conference Proceedings.

Important Dates

Long and short paper submission: May 15th, 2015 (11:59pm PST)
Reviews available for rebuttal: July 21-26, 2015
Paper notification: August 17th, 2015 (11:59pm PST)

Topics of interest include but are not limited to:

  • Multimodal signal and interaction processing technologies

    Methods for machine learning, data mining, and adaptation for multimodal signals; multimodal signal processing, inference, and input fusion over combinations of signals (e.g., derived from visual, audio, haptic, bio- and machine sensors), semantic interpretations (e.g., of spoken language, gesture, facial expression, sketch), and hybrid combinations; multimodal output planning and coordination (e.g., coordinated speech and gesture); etc.

  • Multimodal models for human-human and human-machine interaction

Multimodal models for human-human communication dynamics and collaboration, including verbal and non-verbal interaction; models for physically situated human-computer and human-robot interaction; models for multiparty, group and social interaction; models for multimodal dialogue; models for long-term multimodal interaction; affective computing and interaction models; cognitive, linguistic, psycho-linguistic, cultural models, and other perspectives on multimodal interaction and communication; etc.

  • Multimodal data, evaluation and tools

    Multimodal corpora, resources and tools; evaluation methodologies, assessment and metrics; multimodal annotation methodologies and coding schemes; design issues, principles and best practices for multimodal interfaces; authoring techniques; standards; etc.

  • Multimodal systems and applications

    Ambient intelligence and smart environments; human-robot interaction; embodied conversational agents; multimodal interfaces for internet-of-things; multimodal automotive user interfaces; virtual and augmented reality; meeting spaces and meeting analysis systems; multimodal mobile applications; education; entertainment; healthcare and assistive technologies; affective interfaces; etc.

Submissions in these areas are invited for workshop topics, special session topics, full papers, short papers, demos, and for the doctoral consortium.