ICMI 2013 Program

Important note: the program is still subject to change at this stage!

Monday, 9 December 2013. Location: NICTA ATP Research Laboratory

Multimodal Grand Challenges and Doctoral Consortium. Details TBA

Tuesday, 10 December 2013. Location: Coogee Bay Hotel

09:00  Welcome
       Julien Epps

09:15  Keynote 1: Behavior Imaging and the Study of Autism
       James M. Rehg

10:15  Break

10:35-12:15  Oral Session 1: Personality

10:35  On the relationship between head pose, social attention and personality prediction for unstructured and dynamic group interactions
       Ramanathan Subramanian, Yan Yan, Jacopo Staiano, Oswald Lanz, Nicu Sebe

11:00  One of a Kind: Inferring Personality Impressions in Meetings
       Oya Aran, Daniel Gatica-Perez

11:25  Who is Persuasive? The Role of Perceived Personality and Communication Modality in Social Multimedia
       Gelareh Mohammadi, Sunghyun Park, Kenji Sagae, Alessandro Vinciarelli, Louis-Philippe Morency

11:50  Going Beyond Traits: Multimodal Classification of Personality States in the Wild
       Kyriaki Kalimeri, Bruno Lepri, Fabio Pianesi

12:15  Lunch (sponsored, included with registration)

13:35-15:15  Oral Session 2: Communication

13:35  Implementation and Evaluation of Multimodal Addressee Identification Mechanism for Multiparty Conversation Systems
       Yukiko Nakano, Naoya Baba, Hung-Hsuan Huang, Yuki Hayashi

14:00  Managing Chaos: Models of Turn-taking in Character-multichild Interactions
       Iolanda Leite, Hannaneh Hajishirzi, Sean Andrist, Jill Lehman

14:25  Speaker-Adaptive Multimodal Prediction Model for Listener Responses
       Iwan de Kok, Dirk Heylen, Louis-Philippe Morency

14:50  User experiences of mobile audio conferencing with spatial audio, haptics and gestures
       Jussi Rantala, Katja Suhonen, Sebastian Müller, Kaisa Väänänen-Vainio-Mattila, Vuokko Lantz, Roope Raisamo

15:15  Break

15:30-17:00  Demo Session

A Framework for Multimodal Data Collection, Visualization, Annotation and Learning
  Anne Loomis Thompson, Dan Bohus

Demonstration of Sketch-Thru-Plan: A Multimodal Interface for Command and Control
  Phil Cohen, Cecelia Buchanan, Ed Kaiser, Michael Corrigan, Scott Lind, Matt Wesson

Robotic Learning Companions for Early Language Development
  Jacqueline Kory, Sooyeon Jeong, Cynthia Breazeal

WikiTalk Human-Robot Interactions
  Graham Wilcock, Kristiina Jokinen

15:30-17:00  Poster Session

Saliency-Guided 3D Head Pose Estimation on 3D Expression Models
  Peng Liu, Michael Reale, Xing Zhang, Lijun Yin

Predicting Next Speaker and Timing from Gaze Transition Patterns in Multi-Party Meetings
  Ryo Ishii, Kazuhiro Otsuka, Shiro Kumano, Masafumi Matsuda, Junji Yamato

A Semi-Automated System for Accurate Gaze Coding in Natural Dyadic Interactions
  Kenneth Alberto Funes Mora, Laurent Nguyen, Daniel Gatica-Perez, Jean-Marc Odobez

Evaluating the robustness of an appearance-based gaze estimation method for multimodal interfaces
  Nanxiang Li, Carlos Busso

A Gaze-based Method for Relating Group Involvement to Individual Engagement in Multimodal Multiparty Dialogue
  Catharine Oertel, Giampiero Salvi

Leveraging the Robot Dialog State for Visual Focus of Attention Recognition
  Samira Sheikhi, Vasil Khalidov, David Klotz, Britta Wrede, Jean-Marc Odobez

CoWME: A general framework to evaluate cognitive workload during multimodal interaction
  Davide Calandra, Antonio Caso, Francesco Cutugno, Antonio Origlia, Silvia Rossi

Hi YouTube! Personality Impressions and Verbal Content in Social Video
  Joan-Isaac Biel, Vagia Tsiminaki, John Dines, Daniel Gatica-Perez

Cross-Domain Personality Prediction: From Video Blogs to Small Group Meetings
  Oya Aran, Daniel Gatica-Perez

Automatic Detection of Deceit in Verbal Communication
  Rada Mihalcea, Veronica Perez-Rosas, Mihai Burzo

Audiovisual Behavior Descriptors for Depression Assessment
  Stefan Scherer, Giota Stratou, Louis-Philippe Morency

A Markov Logic Framework for Recognizing Complex Events from Multimodal Data
  Young Chol Song, Henry Kautz, James Allen, Mary Swift, Yuncheng Li, Jiebo Luo, Ce Zhang

Interactive Relevance Search and Modeling: Support for Expert-Driven Analysis of Multimodal Data
  Chreston Miller, Francis Quek, Louis-Philippe Morency

Predicting Speech Overlaps from Speech Tokens and Co-occurring Body Behaviors in Dyadic Conversations
  Costanza Navarretta

Interaction Analysis and Joint Attention Tracking in Augmented Reality
  Alexander Neumann, Christian Schnier, Thomas Hermann, Karola Pitsch

Mo!Games: Evaluating Mobile Gestures in the Wild
  Julie Williamson, Rama Vennelikanti, Stephen Brewster

Timing and entrainment of multimodal backchanneling behavior for an embodied conversational agent
  Benjamin Inden, Zofia Malisz, Petra Wagner, Ipke Wachsmuth

Video Analysis of Approach-Avoidance Behaviors of Teenagers Speaking with Virtual Agents
  David Antonio Gómez Jáuregui, Léonor Philip, Céline Clavel, Stéphane Padovani, Mahin Bailly, Jean-Claude Martin

A Dialogue System for Multimodal Human-Robot Interaction
  Lorenzo Lucignano, Francesco Cutugno, Silvia Rossi, Alberto Finzi

The Zigzag Paradigm: A New P300-based Brain-Computer Interface
  Qasem Obeidat, Tom Campbell, Jun Kong

SpeeG2: A Speech- and Gesture-based Interface for Efficient Controller-free Text Input
  Lode Hoste, Beat Signer

17:00-18:15  Oral Session 3: Intelligent & Multimodal Interfaces

17:00  Interfaces for thinkers: Computer input capabilities that support inferential reasoning
       Sharon Oviatt

17:25  Adaptive Timeline Interface to Personal History Data
       Antti Ajanki, Markus Koskela, Jorma Laaksonen, Samuel Kaski

17:50  Learning a Sparse Codebook of Facial and Body Microexpressions for Emotion Recognition
       Yale Song, Louis-Philippe Morency, Randall Davis

18:30  Welcome Reception (finishes at 20:30)

Wednesday, 11 December 2013. Location: Coogee Bay Hotel

09:00  Keynote 2: Giving Interaction a Hand – Deep Models of Co-speech Gesture in Multimodal Systems
       Stefan Kopp

10:00  Break

10:20-12:00  Oral Session 4: Embodied Interfaces

10:20  Five Key Challenges in End-User Development for Tangible and Embodied Interaction
       Daniel Tetteroo, Iris Soute, Panos Markopoulos

10:45  How Can I Help You? Comparing Engagement Classification Strategies for a Robot Bartender
       Mary Ellen Foster, Andre Gaschler, Manuel Giuliani

11:10  Comparing Task-based and Socially Intelligent Behaviour in a Robot Bartender
       Manuel Giuliani, Ron Petrick, Mary Ellen Foster, Andre Gaschler, Amy Isard, Maria Pateraki, Markos Sigalas

11:35  A dynamic multimodal approach for assessing learners’ interaction experience
       Imène Jraidi, Maher Chaouachi, Claude Frasson

12:00  Lunch (sponsored, included with registration)

13:45-15:25  Oral Session 5: Hand & Body

13:45  Relative Accuracy Measures for Stroke Gestures
       Radu-Daniel Vatavu, Lisa Anthony, Jacob O. Wobbrock

14:10  LensGesture: Augmenting Mobile Interactions with Back-of-Device Finger Gestures
       Xiang Xiao, Teng Han, Jingtao Wang

14:35  Aiding Human Discovery of Handwriting Recognition Errors
       Ryan Stedman, Michael Terry, Edward Lank

15:00  Context-based Conversational Hand Gesture Classification in Narrative Interaction
       Shogo Okada, Mayumi Bono, Katsuya Takanashi, Yasuyuki Sumi, Katsumi Nitta

15:25  Break

15:45-17:15  Demo Session

A Haptic Touchscreen Interface for Mobile Devices
  Jong-Uk Lee, Jeong-Mook Lim, Heesook Shin, Ki-Uk Kyung

A Social Interaction System for Studying Humor with the Robot NAO
  Laurence Devillers, Mariette Soury

TASST: Affective Mediated Touch
  Aduén Frederiks, Dirk Heylen, Gijs Huisman

Talk ROILA to your Robot
  Omar Mubin, Joshua Henderson, Christoph Bartneck

NEMOHIFI: An Affective HiFi Agent
  Syaheerah Lebai Lutfi, Fernando Fernandez-Martinez, Jaime Lorenzo-Trueba, Roberto Barra-Chicote, Juan Manuel Montero

15:45-17:15  Doctoral Spotlight Session

Persuasiveness in Social Multimedia: The Role of Communication Modality and the Challenge of Crowdsourcing Annotations
  Sunghyun Park

Towards a Dynamic View of Personality: Multimodal Classification of Personality States in Everyday Situations
  Kyriaki Kalimeri

Designing Effective Multimodal Behaviors for Robots: A Data-Driven Perspective
  Chien-Ming Huang

Controllable Models of Gaze Behavior for Virtual Agents and Humanlike Robots
  Sean Andrist

The Nature of the Bots: How People Respond to Robots, Virtual Agents and Humans as Multimodal Stimuli
  Jamy Li

Adaptive Virtual Rapport for Embodied Conversational Agents
  Ivan Gris Sepulveda

3D head pose and gaze tracking and their application to diverse multimodal tasks
  Kenneth Alberto Funes Mora

Towards Developing a Model for Group Involvement and Individual Engagement
  Catharine Oertel

Gesture Recognition Using Depth Images
  Bin Liang

Modeling Semantic Aspects of Gaze Behavior while Catalog Browsing
  Erina Ishikawa

Computational Behaviour Modelling for Autism Diagnosis
  Shyam Sundar Rajagopalan

17:15-18:00  Grand Challenge Overviews

ChaLearn Challenge and Workshop on Multi-modal Gesture Recognition
Emotion Recognition in the Wild Challenge and Workshop (EmotiW)
Multimodal Learning Analytics (MMLA)

18:30  Banquet
       Buses will leave shortly after 18:00 (exact time TBA) and will be back at 22:00.

Thursday, 12 December 2013. Location: Coogee Bay Hotel

09:00  Keynote 3: Hands and Speech in Space: Multimodal Interaction with Augmented Reality Interfaces
       Mark Billinghurst

10:00  Break

10:30-12:10  Oral Session 6: AR, VR & Mobile

10:30  Evaluating Dual-view Perceptual Issues in Handheld Augmented Reality: Device vs. User Perspective Rendering
       Klen Copic Pucihar, Paul Coulton, Jason Alexander

10:55  MM+Space: n x 4 Degree-of-Freedom Kinetic Display for Recreating Multiparty Conversation Spaces
       Kazuhiro Otsuka, Shiro Kumano, Ryo Ishii, Maja Zbogar, Junji Yamato

11:20  Investigating Appropriate Spatial Relationship between User and AR Character Agent for Communication Using AR WoZ System
       Reina Aramaki, Makoto Murakami

11:45  Inferring Social Activities with Mobile Sensor Networks
       Trinh Minh Tri Do, Kyriaki Kalimeri, Bruno Lepri, Fabio Pianesi, Daniel Gatica-Perez

12:10  Lunch (sponsored, included with registration)

14:00-15:45  Oral Session 7: Eyes & Body

14:00  Effects of language proficiency on eye-gaze in second language conversations: toward supporting second language collaboration
       Ichiro Umata, Seiichi Yamamoto, Koki Ijuin, Masafumi Nishida

14:25  Predicting Where We Look from Spatiotemporal Gaps
       Ryo Yonetani, Hiroaki Kawashima, Takashi Matsuyama

14:50  Automatic Multimodal Descriptors of Rhythmic Body Movement
       Marwa Mahmoud, Louis-Philippe Morency, Peter Robinson

15:15  Multimodal Analysis of Body Communication Cues in Employment Interviews
       Laurent Nguyen, Alvaro Marcos-Ramiro, Marta Marrón Romera, Daniel Gatica-Perez

15:40-17:00  ICMI Town Hall Meeting (open to all attendees)

Friday, 13 December 2013. Location: Coogee Bay Hotel

Workshops
