Workshops

The ICMI workshop program aims to provide researchers with a more informal, discussion-oriented forum to explore emerging topics in multimodal interaction or revisit established research areas from a new angle. This year we received a record number of high-quality workshop submissions and selected the following workshops to be held on the last day of the conference:

  1. 1st International Workshop on Advancements in Social Signal Processing for Multimodal Interaction
  2. 1st Workshop on Modeling INTERPERsonal SynchrONy And infLuence (INTERPERSONAL)
  3. Workshop on Multimodal Deception Detection
  4. 1st Joint Workshop on Emotion Representations and Modelling for Companion Systems (ERM4CT 2015)
  5. Developing portable & context-aware multimodal applications for connected devices using W3C Multimodal Architecture

ICMI 2015 Workshop Chairs

Jean-Marc Odobez (IDIAP, Switzerland)
Hayley Hung (Delft University of Technology, Netherlands)

1st International Workshop on Advancements in Social Signal Processing for Multimodal Interaction

Organizers: Khiet Truong, Dirk Heylen, Mohamed Chetouani, Bilge Mutlu, Albert Ali Salah

A decade after the introduction of social signal processing (SSP) as a research field, we believe it is time to take stock and look into the future. The goal of this workshop is to bring together researchers to discuss recent as well as future developments in SSP for multimodal interaction research: where do we stand now, what are the recent developments in novel methods and application areas, what are the major challenges, and how do we further mature SSP and broaden and increase its impact? We also believe it is necessary to ensure the quality and advancement of research in SSP by training students with the necessary expertise. Since SSP is a relatively new research domain, a textbook for teaching SSP is not (yet) available. How we may teach SSP is therefore another topic of interest in this workshop.

We invite both research and position papers and aim for a mix of presentations on recent research and discussions about the future of SSP.

CALL FOR PAPERS

Please visit our website for more information:
http://hmi.ewi.utwente.nl/icmi2015-assp4mi/CfP.html

1st Workshop on Modeling INTERPERsonal SynchrONy And infLuence (INTERPERSONAL)

Organizers: Mohamed Chetouani, Giovanna Varni, Hanan Salam, Zakia Hammal, Jeffrey F. Cohn

Understanding human behavior through computer vision and signal processing has become of major interest with the emergence of social signal processing and affective computing and their applications to human-computer interaction. With few exceptions, research has focused on the detection of individual persons and their nonverbal behavior in the context of emotion and related psychosocial constructs. With advances in methodology, there is increasing interest in advancing beyond the individual to the social interaction of multiple individuals. This level of analysis brings to the fore the detection and understanding of interpersonal influence and interpersonal synchrony in social interaction.

Interpersonal synchrony in social interaction between interactive partners is the dynamic and reciprocal adaptation of their verbal and nonverbal behaviors. It affords both a novel domain for computer vision and machine learning, as well as a novel context with which to examine individual variation in cognitive, physiological, and neural processes in the interacting members. Interdisciplinary approaches to interpersonal synchrony are encouraged. Investigating these complex phenomena has both theoretical and practical applications.

The proposed workshop will explore the challenges of modeling, recognition, and synthesis of influence and interpersonal synchrony. It will address theory, computational models, and algorithms for the automatic analysis and synthesis of influence and interpersonal synchrony. We wish to explore both influence and interpersonal synchrony in human-human and human-machine interaction in dyadic and multi-person scenarios. Expected topics include the definition of different categories of interpersonal synchrony and influence, multimodal corpora annotation of interpersonal influence, dynamics of relevant behavioral patterns, and synthesis and recognition of verbal and nonverbal patterns of interpersonal synchrony and influence. The INTERPERSONAL workshop will afford an opportunity to discuss new applications such as clinical assessment, consumer behavior analysis, and the design of socially aware interfaces.

CALL FOR PAPERS

Please visit our website for more information:
http://interpersonalicmi2015.isir.upmc.fr

Workshop on Multimodal Deception Detection

Organizers: Mohamed Abouelenien, Mihai Burzo, Rada Mihalcea, Veronica Perez-Rosas

The widespread use of deception in offline and online communication suggests the need for methods to automatically detect deceit. The 2015 ACM Workshop on Multimodal Deception Detection (WMDD 2015), held in conjunction with the 17th ACM International Conference on Multimodal Interaction (ICMI 2015), will focus on multimodal and interdisciplinary approaches to deception detection, as well as approaches that utilize a single modality with clear potential for integration with additional modalities. Deception detection has received an increasing amount of attention due to the significant growth of digital media, as well as increased ethical and security concerns. Earlier approaches to deception detection were mainly focused on law enforcement applications and relied on polygraph tests, which have been shown to falsely accuse the innocent and free the guilty in multiple cases. More recent work on deception has expanded to other applications, such as deception detection in social media, interviews, or day-to-day interactions. Moreover, recent research on deception detection has brought together scientists from fields as diverse as computational linguistics, speech processing, computer vision, psychology, and physiology, which makes this problem particularly appealing for multimodal processing. The goal of this workshop is to provide participants with a forum to foster the dissemination of ideas on computational and behavioral methodologies for deception detection.

We invite the submission of long (8 pages) and short (4 pages) papers. We encourage the submission of papers that address the multimodal perspective of deception detection, as well as papers that use clues from a single modality but with the clear potential of being integrated with additional modalities. We also encourage the submission of interdisciplinary work stemming from a variety of fields such as computational linguistics, speech processing, computer vision, psychology, physiology, and others.

CALL FOR PAPERS

Please visit our website for more information:
http://lit.eecs.umich.edu/icmi-workshop/

1st Joint Workshop on Emotion Representations and Modelling for Companion Systems (ERM4CT 2015)

Organizers: Kim Hartmann, Ingo Siegert, Björn Schuller, Louis-Philippe Morency, Albert Ali Salah, Ronald Boeck

The major goal of human-computer interaction (HCI) research and applications is to improve the interaction between humans and computers. As a user's behaviour is often highly individual and generally multi-modal in nature, a trend toward multi-modal, user-adaptable HCI systems has emerged over the past years. These systems are designed as companions capable of assisting their users based on the users' needs, preferences, personality and affective state. However, the adequate incorporation of emotions into these modern HCI systems has proved to be a challenging task. Depending on the modalities used, the user model incorporated and the application scenario, varying difficulties in emotion representation and modelling may arise.

The ERM4CT workshop is a joint workshop of the 3rd "Emotion Representations and Modelling for HCI Systems" (ERM4HCI - http://erm4hci.kognitivesysteme.de/) and the 2nd "Techniques towards Companion Technologies" (T2CT - http://t2ct.kognitivesysteme.de/) workshops. The ERM4CT workshop aims at highlighting the specific issues associated with the multi-modal emotion representations needed in companion technologies. This results in a special focus on emotion representations and signal characteristics that describe and identify emotions, as well as on their influence on the personality and user-state models to be incorporated in companion systems.

CALL FOR PAPERS

Please visit our website for more information:
http://erm4ct.cogsy.de

Developing portable & context-aware multimodal applications for connected devices using W3C Multimodal Architecture (Sponsored Workshop)

Organizers: Nagesh Kharidi, Raj Tumuluri

The W3C Multimodal Architecture is a standard for integrating the components of a multimodal system into a smoothly coordinated application through the use of standard life-cycle events. Within the Multimodal Architecture, standards such as SCXML, InkML, EMMA, and JSON are used to manage the various modalities of interaction and to represent the semantics of user inputs in a modality-independent fashion. Participants will learn how to use these standards to enhance existing applications with multimodality (e.g., typing, handwriting, speaking) using Openstream's context-aware multimodal platform. Attendees will get hands-on experience developing portable multimodal applications based on the MMI Architecture.
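To give a flavour of the life-cycle events mentioned above, the following is a minimal sketch of a StartRequest event, by which the Interaction Manager instructs a modality component to begin processing. The Context, Source, Target, and RequestID values and the referenced content file are purely illustrative placeholders, not taken from the workshop materials:

```xml
<mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
  <!-- Interaction Manager asks a (hypothetical) speech modality
       component to start processing the referenced content -->
  <mmi:StartRequest Context="ctx-example-1"
                    Source="im-uri-placeholder"
                    Target="speech-mc-uri-placeholder"
                    RequestID="req-1">
    <mmi:ContentURL href="dialog-example.vxml"/>
  </mmi:StartRequest>
</mmi:mmi>
```

The modality component would answer with a StartResponse carrying the same Context and RequestID, and later a DoneNotification when processing completes; this request/response pairing is what allows components from different vendors to coordinate without knowing each other's internals.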

Duration: 3 hours

Target Audience: Anyone interested in developing portable multimodal applications for mobile and robotic systems

Attendee laptop Pre-requisites:
Windows: Laptop running Windows 7 or higher, with 32-bit Java JRE 6 or higher
Mac: Mac running OS X 10.8.x or higher, with 64-bit Java JRE 6

Attendees will be provided with an Eclipse-based toolset for development during the session, distributed on USB drives.

Biography:
Nagesh Kharidi, Technical Director at Openstream Inc., is a core member of the team behind Openstream's Cue-me context-aware multimodal platform and Enterprise Virtual Assistant (EVA) platform, a member of the W3C Multimodal Interaction Working Group, and a co-author of EMMA 2.0.

Raj Tumuluri
Principal, Context-aware Multimodal Technologies, Openstream Inc.
Co-author, W3C Multimodal Interaction Architecture

ICMI 2015 ACM International Conference on Multimodal Interaction. 9-13th November 2015, Seattle, USA. Copyright © 2010-2017