ICMI 2013 Program

Important note: the program is still subject to change at this stage!

Monday, 9 December 2013 Location: NICTA ATP Research Laboratory

Multimodal Grand Challenges and Doctoral Consortium. Details TBA

Tuesday, 10 December 2013 Location: Coogee Bay Hotel



Julien Epps


Keynote 1: Behavior Imaging and the Study of Autism

James M. Rehg




Oral Session 1: Personality


On the relationship between head pose, social attention and personality prediction for unstructured and dynamic group interactions

Ramanathan Subramanian, Yan Yan, Jacopo Staiano, Oswald Lanz, Nicu Sebe


One of a Kind: Inferring Personality Impressions in Meetings

Oya Aran, Daniel Gatica-Perez


Who is Persuasive? The Role of Perceived Personality and Communication Modality in Social Multimedia

Gelareh Mohammadi, Sunghyun Park, Kenji Sagae, Alessandro Vinciarelli, Louis-Philippe Morency


Going Beyond Traits: Multimodal Classification Of Personality States In The Wild

Kyriaki Kalimeri, Bruno Lepri, Fabio Pianesi



Sponsored lunch, included with registration


Oral Session 2: Communication


Implementation and Evaluation of Multimodal Addressee Identification Mechanism for Multiparty Conversation Systems

Yukiko Nakano, Naoya Baba, Hung-Hsuan Huang, Yuki Hayashi


Managing Chaos: Models of Turn-taking in Character-multichild Interactions

Iolanda Leite, Hannaneh Hajishirzi, Sean Andrist, Jill Lehman


Speaker-Adaptive Multimodal Prediction Model for Listener Responses

Iwan de Kok, Dirk Heylen, Louis-Philippe Morency


User experiences of mobile audio conferencing with spatial audio, haptics and gestures

Jussi Rantala, Katja Suhonen, Sebastian Müller, Kaisa Väänänen-Vainio-Mattila, Vuokko Lantz, Roope Raisamo




Demo Session

A Framework for Multimodal Data Collection, Visualization, Annotation and Learning

Anne Loomis Thompson, Dan Bohus

Demonstration of Sketch-Thru-Plan: A Multimodal Interface for Command and Control

Phil Cohen, Cecelia Buchanan, Ed Kaiser, Michael Corrigan, Scott Lind, Matt Wesson

Robotic Learning Companions for Early Language Development

Jacqueline Kory, Sooyeon Jeong, Cynthia Breazeal

WikiTalk Human-Robot Interactions

Graham Wilcock, Kristiina Jokinen


Poster Session

Saliency-Guided 3D Head Pose Estimation on 3D Expression Models

Peng Liu, Michael Reale, Xing Zhang, Lijun Yin

Predicting Next Speaker and Timing from Gaze Transition Patterns in Multi-Party Meetings

Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Masafumi Matsuda, Junji Yamato

A Semi-Automated System for Accurate Gaze Coding in Natural Dyadic Interactions

Kenneth Alberto Funes Mora, Laurent Nguyen, Daniel Gatica-Perez, Jean-Marc Odobez

Evaluating the robustness of an appearance-based gaze estimation method for multimodal interfaces

Nanxiang Li, Carlos Busso

A Gaze-based Method for Relating Group Involvement to Individual Engagement in Multimodal Multiparty Dialogue

Catharine Oertel, Giampiero Salvi

Leveraging the Robot Dialog State for Visual Focus of Attention Recognition

Samira Sheikhi, Vasil Khalidov, David Klotz, Britta Wrede, Jean-Marc Odobez

CoWME: A general framework to evaluate cognitive workload during multimodal interaction

Davide Calandra, Antonio Caso, Francesco Cutugno, Antonio Origlia, Silvia Rossi

Hi YouTube! Personality Impressions and Verbal Content in Social Video

Joan-Isaac Biel, Vagia Tsiminaki, John Dines, Daniel Gatica-Perez

Cross-Domain Personality Prediction: From Video Blogs to Small Group Meetings

Oya Aran, Daniel Gatica-Perez

Automatic Detection of Deceit in Verbal Communication

Rada Mihalcea, Veronica Perez-Rosas, Mihai Burzo

Audiovisual Behavior Descriptors for Depression Assessment

Stefan Scherer, Giota Stratou, Louis-Philippe Morency

A Markov Logic Framework for Recognizing Complex Events from Multimodal Data

Young Chol Song, Henry Kautz, James Allen, Mary Swift, Yuncheng Li, Jiebo Luo, Ce Zhang

Interactive Relevance Search and Modeling: Support for Expert-Driven Analysis of Multimodal Data

Chreston Miller, Francis Quek, Louis-Philippe Morency

Predicting Speech Overlaps from Speech Tokens and Co-occurring Body Behaviors in Dyadic Conversations

Costanza Navarretta

Interaction Analysis and Joint Attention Tracking In Augmented Reality

Alexander Neumann, Christian Schnier, Thomas Hermann, Karola Pitsch

Mo!Games: Evaluating Mobile Gestures in the Wild

Julie Williamson, Rama Vennelikanti, Stephen Brewster

Timing and entrainment of multimodal backchanneling behavior for an embodied conversational agent

Benjamin Inden, Zofia Malisz, Petra Wagner, Ipke Wachsmuth

Video Analysis of Approach-Avoidance Behaviors of Teenagers Speaking with Virtual Agents

David Antonio Gómez Jáuregui, Léonor Philip, Céline Clavel, Stéphane Padovani, Mahin Bailly, Jean-Claude Martin

A Dialogue System for Multimodal Human-Robot Interaction

Lorenzo Lucignano, Francesco Cutugno, Silvia Rossi, Alberto Finzi

The Zigzag Paradigm: A new P300-based Brain Computer Interface

Qasem Obeidat, Tom Campbell, Jun Kong

SpeeG2: A Speech- and Gesture-based Interface for Efficient Controller-free Text Input

Lode Hoste, Beat Signer


Oral Session 3: Intelligent & Multimodal Interfaces


Interfaces for thinkers: Computer input capabilities that support inferential reasoning

Sharon Oviatt


Adaptive Timeline Interface to Personal History Data

Antti Ajanki, Markus Koskela, Jorma Laaksonen, Samuel Kaski


Learning a Sparse Codebook of Facial and Body Microexpressions for Emotion Recognition

Yale Song, Louis-Philippe Morency, Randall Davis


Welcome Reception

Reception finishes at 20:30.

Wednesday, 11 December 2013 Location: Coogee Bay Hotel


Keynote 2: Giving Interaction a Hand – Deep Models of Co-speech Gesture in Multimodal Systems

Stefan Kopp




Oral Session 4: Embodied Interfaces


Five Key Challenges in End-User Development for Tangible and Embodied Interaction

Daniel Tetteroo, Iris Soute, Panos Markopoulos


How Can I Help You? Comparing Engagement Classification Strategies for a Robot Bartender

Mary Ellen Foster, Andre Gaschler, Manuel Giuliani


Comparing Task-based and Socially Intelligent Behaviour in a Robot Bartender

Manuel Giuliani, Ron Petrick, Mary Ellen Foster, Andre Gaschler, Amy Isard, Maria Pateraki, Markos Sigalas


A dynamic multimodal approach for assessing learners’ interaction experience

Imène Jraidi, Maher Chaouachi, Claude Frasson



Sponsored lunch, included with registration


Oral Session 5: Hand & Body


Relative Accuracy Measures for Stroke Gestures

Radu-Daniel Vatavu, Lisa Anthony, Jacob O. Wobbrock


LensGesture: Augmenting Mobile Interactions with Back-of-Device Finger Gestures

Xiang Xiao, Teng Han, Jingtao Wang


Aiding Human Discovery of Handwriting Recognition Errors

Ryan Stedman, Michael Terry, Edward Lank


Context based Conversational Hand Gesture Classification in Narrative Interaction

Shogo Okada, Mayumi Bono, Katsuya Takanashi, Yasuyuki Sumi, Katsumi Nitta




Demo Session

A Haptic Touchscreen Interface for Mobile Devices

Jong-Uk Lee, Jeong-Mook Lim, Heesook Shin, Ki-Uk Kyung

A Social Interaction System for Studying Humor with the Robot NAO

Laurence Devillers, Mariette Soury

TASST: Affective Mediated Touch

Aduén Frederiks, Dirk Heylen, Gijs Huisman

Talk ROILA to your Robot

Omar Mubin, Joshua Henderson, Christoph Bartneck

NEMOHIFI: An Affective HiFi Agent

Syaheerah Lebai Lutfi, Fernando Fernandez-Martinez, Jaime Lorenzo-Trueba, Roberto Barra-Chicote, Juan Manuel Montero


Doctoral Spotlight Session

Persuasiveness in Social Multimedia: The Role of Communication Modality and the Challenge of Crowdsourcing Annotations

Sunghyun Park

Towards a Dynamic View of Personality: Multimodal Classification of Personality States in Everyday Situations

Kyriaki Kalimeri

Designing Effective Multimodal Behaviors for Robots: A Data-Driven Perspective

Chien-Ming Huang

Controllable Models of Gaze Behavior for Virtual Agents and Humanlike Robots

Sean Andrist

The Nature of the Bots: How People Respond to Robots, Virtual Agents and Humans as Multimodal Stimuli

Jamy Li

Adaptive Virtual Rapport for Embodied Conversational Agents

Ivan Gris Sepulveda

3D head pose and gaze tracking and their application to diverse multimodal tasks

Kenneth Alberto Funes Mora

Towards Developing a Model for Group Involvement And Individual Engagement

Catharine Oertel

Gesture Recognition Using Depth Images

Bin Liang

Modeling Semantic Aspects of Gaze Behavior while Catalog Browsing

Erina Ishikawa

Computational Behaviour Modelling for Autism Diagnosis

Shyam Sundar Rajagopalan


Grand Challenge Overviews

ChaLearn Challenge and Workshop on Multi-modal Gesture Recognition

Emotion Recognition In The Wild Challenge and Workshop (EmotiW)

Multimodal Learning Analytics (MMLA)



Buses will leave shortly after 18:00 (exact time TBA) and return at 22:00

Thursday, 12 December 2013 Location: Coogee Bay Hotel


Keynote 3: Hands and Speech in Space: Multimodal Interaction with Augmented Reality interfaces

Mark Billinghurst




Oral Session 6: AR, VR & Mobile


Evaluating Dual-view Perceptual Issues in Handheld Augmented Reality: Device vs. User Perspective Rendering

Klen Čopič Pucihar, Paul Coulton, Jason Alexander


MM+Space: n x 4 Degree-of-Freedom Kinetic Display for Recreating Multiparty Conversation Spaces

Kazuhiro Otsuka, Shiro Kumano, Ryo Ishii, Maja Zbogar, Junji Yamato


Investigating Appropriate Spatial Relationship between User and AR Character Agent for Communication Using AR WoZ System

Reina Aramaki, Makoto Murakami


Inferring Social Activities with Mobile Sensor Networks

Trinh Minh Tri Do, Kyriaki Kalimeri, Bruno Lepri, Fabio Pianesi, Daniel Gatica-Perez



Sponsored lunch, included with registration


Oral Session 7: Eyes & Body


Effects of language proficiency on eye-gaze in second language conversations: toward supporting second language collaboration

Ichiro Umata, Seiichi Yamamoto, Koki Ijuin, Masafumi Nishida


Predicting Where We Look from Spatiotemporal Gaps

Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama


Automatic Multimodal Descriptors of Rhythmic Body Movement

Marwa Mahmoud, Louis-Philippe Morency, Peter Robinson


Multimodal Analysis of Body Communication Cues in Employment Interviews

Laurent Nguyen, Alvaro Marcos-Ramiro, Marta Marrón Romera, Daniel Gatica-Perez


ICMI Town Hall Meeting (open to all attendees)

Friday, 13 December 2013 Location: Coogee Bay Hotel


ICMI 2013 ACM International Conference on Multimodal Interaction, 9–13 December 2013, Sydney, Australia.