General Chairs
Louis-Philippe Morency (Univ. of Southern California)
Dan Bohus (Microsoft Research)
Hamid Aghajan (Stanford Univ.)
Program Chairs
Justine Cassell (Carnegie Mellon Univ.)
Anton Nijholt (Univ. of Twente)
Julien Epps (The Univ. of New South Wales, Australia)
Workshop Chairs
Rainer Stiefelhagen (Karlsruhe Institute of Technology)
Toyoaki Nishida (Kyoto Univ.)
Publication Chairs
Ginevra Castellano (Univ. of Birmingham)
Kenji Mase (Nagoya Univ.)
Demo Chairs
Wolfgang Minker (Ulm Univ.)
Ramón López-Cózar Delgado (Univ. of Granada)
Doctoral Consortium Chairs
Carlos Busso (Univ. of Texas at Dallas)
Bilge Mutlu (Univ. of Wisconsin-Madison)
Multimodal Grand Challenge Chairs
Daniel Gatica-Perez (IDIAP)
Stefanie Tellex (MIT)
Publicity Chairs
Kazuhiro Otsuka (NTT)
Michael Johnston (AT&T Labs)
Sponsorship Chairs
Nicu Sebe (Univ. of Trento)
Patrick Ehlen (AT&T)
Local Organization Chair
Alesia Egan (Univ. of Southern California)
Treasurer
Alesia Egan (Univ. of Southern California)
Web Chair
Chih-Wei Chen (Stanford Univ.)
Call for Papers
The International Conference on Multimodal Interaction, ICMI 2012, will take place in Santa Monica, California (USA), October 22-26, 2012. ICMI is the premier international forum for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. The conference focuses on theoretical and empirical foundations, component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and system development. ICMI 2012 will feature a single-track main conference that includes keynote speakers, technical full and short papers (with oral and poster presentations), special sessions, demonstrations, exhibits, and doctoral spotlight papers. The conference will be followed by workshops. The proceedings of ICMI 2012 will be published by ACM in its International Conference Proceedings Series.
Topics of interest include but are not limited to:
- Multimodal Interaction Processing
Machine learning, pattern recognition, and signal processing approaches for analyzing and modeling multimodal interaction between people and among the different modalities within people; adaptation, multimodal input fusion, and output generation, addressing any combination of vision, gaze, audio, speech, smell/olfaction, taste, gestures, e-pen, haptic and tangible input, and bio-signals such as brain activity and skin conductivity.
- Interactive systems and applications
Mobile and ubiquitous systems, automotive and navigation systems, human-robot and human-virtual agent interaction, virtual and augmented reality, education, authoring, entertainment, gaming, telepresence, assistive and prosthetic systems, brain-computer interfaces, universal access, healthcare, biometry, intelligent environments, meeting analysis and meeting spaces, indexing, retrieval and summarization, etc.
- Modeling human communication patterns
The modalities and applications named above create a need for multimodal models of human-human and human-machine communication, including verbal and nonverbal interaction, affordances of different modalities, multimodal discourse and dialogue modeling, modeling of culture as it pertains to multimodality, long-term multimodal interaction, multimodality in social and affective interaction, and multimodal social signal processing.
- Data, evaluation and standards for multimodal interactive systems
Design issues, principles, best practices, and authoring techniques for human-machine interfaces using any combination of multiple input and/or output modalities; architectures; assessment techniques and methodologies; corpora; annotation and browsing of multimodal interactive data; W3C and other standards for multimodal interaction and interfaces; evaluation techniques for multimodal systems.
Submissions in these areas are invited for workshop topics, special session topics, full papers, short papers, demos, and the doctoral consortium.