ICMI 2015 Conference Program (tentative)

Monday, 09 November 2015

Doctoral Consortium 

Program available as pdf here

Recognition of Social Touch Gestures Challenge 2015

Session Chairs: Merel Jung, Mannes Poel, Karon MacLean, and Laura Cang

Program available as pdf here

Emotion Recognition in the Wild Challenge 2015 

Session Chairs: Abhinav Dhall, Roland Goecke, Jyoti Joshi, and Tom Gedeon

Program available as pdf here

Multimodal Learning and Analytics Grand Challenge 2015 

Session Chairs: Katherine Chiluiza, Joseph Grafsgaard, Xavier Ochoa, and Marcelo Worsley

Program available as pdf here

19:00-21:00

Welcome Reception 

Tuesday, 10 November 2015

Nominees for the Outstanding Paper Award and the Outstanding Student Paper Award are marked with a star (*).

09:00-09:15

Welcome
Zhengyou Zhang and Phil Cohen

09:15-10:15

Keynote 1: Sharing Representations for Long Tail Computer Vision Problems

Dr. Samy Bengio 

Keynote Chair: Zhengyou Zhang

10:15-10:45

Break

10:45-12:15

Oral Session 1: Machine Learning in Multimodal Systems
Session Chair: Radu Horaud

10:45

*Combining Two Perspectives on Classifying Multimodal Data for Recognizing Speaker Traits

Moitreya Chatterjee, Sunghyun Park, Louis-Philippe Morency, Stefan Scherer

11:10

Personality Trait Classification via Co-occurrent Multiparty Multimodal Event Discovery

Shogo Okada, Oya Aran, Daniel Gatica-Perez

11:35

Evaluating speech, face, emotion and body movement time-series features for automated multimodal presentation scoring

Vikram Ramanarayanan, Chee Wee Leong, Lei Chen, Gary Feng, David Suendermann

12:00

Gender Representation in Cinematic Content: A Multimodal Approach
Tanaya Guha, Che-Wei Huang, Naveen Kumar, Yan Zhu, Shrikanth Narayanan

12:15-14:00

Lunch

14:00-15:40

Oral Session 2: Audio-Visual, Multimodal Inference
Session Chair: Yukiko Nakano

14:00

Effects of Good Speaking Techniques on Audience Engagement
Keith Curtis, Gareth J. F. Jones, Nick Campbell

14:25

Multimodal Public Speaking Performance Assessment
Torsten Wörtwein, Mathieu Chollet, Boris Schauerte, Louis-Philippe Morency, Rainer Stiefelhagen, Stefan Scherer

14:50

I would hire you in a minute: Thin slices of nonverbal behavior in job interviews

Laurent Nguyen, Daniel Gatica-Perez

15:15

Deception Detection using Real-life Trial Data
Veronica Perez-Rosas, Mohamed Abouelenien, Rada Mihalcea, Mihai Burzo

15:40-16:00

Break

16:00-18:00

Poster Session  
Session Chairs: Radu Horaud and Dan Bohus

ECA Control using a Single Affective User Dimension
Fred Charles, Florian Pecune, Gabor Aranyi, Catherine Pelachaud, Marc Cavazza

 

Multimodal Interaction with a Bifocal View on Mobile Devices

Sebastien Pelurson, Laurence Nigay

 

NaLMC - A Database on Non-acted and Acted Emotional Sequences in HCI

Kim Hartmann, Julia Krüger, Jörg Frommer, Andreas Wendemuth

 

Exploiting Multimodal Affect and Semantics to Identify Politically Persuasive Web Videos

Behjat Siddiquie, David Chisholm, Ajay Divakaran

 

Toward Better Understanding of Engagement in Multiparty Spoken Interaction with Children

Samer Al Moubayed, Jill Lehman

 

Gestimator - Shape and Stroke Similarity Based Gesture Recognition

Yina Ye, Petteri Nurmi

 

Classification of Children’s Social Dominance in Group Interactions with Robots

Sarah Strohkorb, Iolanda Leite, Natalie Warren, Brian Scassellati

 

Spectators' Synchronization Detection based on Manifold Representation of Physiological Signals: Application to Movie Highlights Detection

Michal Muszynski, Theodoros Kostoulas, Guillaume Chanel, Patrizia Lombardo, Thierry Pun

 

Implicit User-centric Personality Recognition Based on Physiological Responses to Emotional Videos

Julia Wache, Ramanathan Subramanian, Mojtaba Khomami Abadi, Radu L. Vieriu, Nicu Sebe, Stefan Winkler

 

Detecting Mastication - A Wearable Approach

Abdelkareem Bedri, Apoorva Verlekar, Edison Thomaz, Valerie Avva, Thad Starner

Exploring Behavior Representation for Learning Analytics
Marcelo Worsley, Stefan Scherer, Louis-Philippe Morency, Paulo Blikstein

 

Multimodal human activity recognition for industrial manufacturing processes in robotic workcells

Alina Roitberg, Nikhil Somani, Alexander Perzylo, Markus Rickert, Alois Knoll

 

Accuracy vs. Availability Heuristic in Multimodal Affect Detection in the Wild

Nigel Bosch, Huili Chen, Sidney D'Mello, Ryan Baker, Valerie Shute

Dynamic Active Learning Based on Agreement and Applied to Emotion Recognition in Spoken Interactions

Yue Zhang, Eduardo Coutinho, Zixing Zhang, Caijiao Quan, Bjoern Schuller

 

Sharing Touch Interfaces: Proximity-Sensitive Touch Targets for Tablet-Mediated Collaboration

Ilhan Aslan, Thomas Meneweger, Verena Fuchsberger, Manfred Tscheligi

 

Analyzing Multimodality of Video for User Engagement Assessment
Fahim A. Salim, Fasih Haider, Owen Conlan, Saturnino Luz, Nick Campbell

 

Adjacent Vehicle Collision Warning System using Image Sensor and Inertial Measurement Unit

Asif Iqbal, Carlos Busso, Nicholas R. Gans

 

Automatic Detection of Mind Wandering During Reading Using Gaze and Physiology

Robert Bixler, Nathaniel Blanchard, Luke Garrison, Sidney D'Mello

 

Multimodal Detection of Depression in Clinical Interviews
Hamdi Dibeklioglu, Zakia Hammal, Ying Yang, Jeffrey Cohn

 

Spoken Interruptions Signal Productive Problem Solving and Domain Expertise in Mathematics

Sharon Oviatt, Kevin Hang, Jianlong Zhou, Fang Chen

 

Active Haptic Feedback for Touch Enabled TV Remote
Anton Treskunov, Mike Darnell, Rongrong Wang

 

A visual analytics approach to finding factors improving automatic speaker identifications

Pierrick Bruneau, Mickaël Stefas, Hervé Bredin, Johann Poignant, Thomas Tamisier, Claude Barras

 

The Influence of Visual Cues on Passive Tactile Sensations in a Multimodal Immersive Virtual Environment

Nina Rosa, Wolfgang Hürst, Wouter Vos, Peter Werkhoven

 

Detection of deception in the Mafia party game
Sergey Demyanov, James Bailey, Ramamohanarao Kotagiri, Christopher Leckie

 

Individuality-Preserving Voice Reconstruction for Articulation Disorders Using Text-to-Speech Synthesis

Reina Ueda, Tetsuya Takiguchi, Yasuo Ariki

 

Behavioral and emotional spoken cues related to mental states in Human-Robot social interaction

Lucile Bechade, Guillaume Dubuisson Duplessis, Mohamed Sehili, Laurence Devillers

 

Viewpoint Integration for Hand-Based Recognition of Social Interactions from a First-Person View 

Sven Bambach, David Crandall, Chen Yu

 

A Multimodal System for Real-Time Action Instruction in Motor Skill Learning

Iwan de Kok, Julian Hough, Felix Hülsmann, Mario Botsch, David Schlangen, Stefan Kopp

Wednesday, 11 November 2015

09:00-10:00

Keynote 2: Interaction Studies with Social Robots

Prof. Kerstin Dautenhahn

Keynote Chair: Phil Cohen

10:00-10:30

Break

10:30-11:50

Oral Session 3: Language, Speech and Dialog
Session Chair: Jill Lehman

10:30

*Exploring turn-taking cues in multi-party human-robot discussions about objects

Gabriel Skantze, Martin Johansson, Jonas Beskow

10:55

*Visual Saliency and Crowdsourcing-based Priors for an In-car Situated Dialog System

Teruhisa Misu

11:20

Leveraging Behavioral Patterns of Mobile Applications for Personalized Spoken Language Understanding

Yun-Nung Chen, Ming Sun, Alexander Rudnicky, Anatole Gershman

11:35

Who's Speaking? Audio-Supervised Classification of Active Speakers in Video

Punarjay Chakravarty, Sayeh Mirzaei, Tinne Tuytelaars, Hugo Van hamme

11:50-13:30

Lunch

13:30-15:10

Oral Session 4: Communication Dynamics 
Session Chair: Louis-Philippe Morency

13:30

Predicting Participation Styles using Co-occurrence Patterns of Nonverbal Behaviors in Collaborative Learning

Yukiko Nakano, Sakiko Nihonyanagi, Yutaka Takase, Yuki Hayashi, Shogo Okada

13:55

Multimodal Fusion using Respiration and Gaze for Predicting Next Speaker in Multi-Party Meetings 

Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka

14:20

*Deciphering the Silent Participant: On the Use of Audio-Visual Cues for the Classification of Listener Categories in Group Discussions

Catharine Oertel, Kenneth Alberto Funes Mora, Joakim Gustafson, Jean-Marc Odobez

14:45

Retrieving target gestures toward speech driven animation with meaningful behaviors  

Najmeh Sadoughi, Carlos Busso

15:10-15:45

Break

15:45-16:45

Doctoral Consortium Posters
Session Chair: Carlos Busso

Temporal Association Rules for modelling multimodal social signals
Thomas Janssoone

Detecting and Synthesizing Synchronous Joint Action in Human-Robot Teams

Tariq Iqbal

Micro-opinion Sentiment Intensity Analysis and Summarization in Online Videos

Amir Zadeh

Attention and Engagement Aware Multimodal Conversational Systems

Zhou Yu

 

Implicit Human-computer Interaction: Two Complementary Approaches

Julia Wache

 

Instantaneous and Robust Eye-Activity Based Task Analysis
Hoe Kin Wong

 

Challenges in Deep Learning for Multimodal Applications
Sayan Ghosh

 

Exploring Intent-driven Multimodal Interface for Geographical Information System

Feng Sun

 

Software Techniques for Multimodal Input Processing in Realtime Interactive Systems

Martin Fischbach

 

Gait and Postural Sway Analysis: A Multi-Modal System
Hafsa Ismail

 

A Computational Model of Culture-Specific Emotions for Artificial Agents in the Learning Domain

Ganapreeta Naidu

 

Record, Transform & Reproduce Social Encounters in Immersive VR: An Iterative Approach

Jan Kolkmeier

 

Multimodal Affect Detection in the Wild: Accuracy, Availability, and Generalizability

Nigel Bosch

 

Multimodal Assessment of Teaching Behavior in Immersive Rehearsal Environment – TeachLivE

Roghayeh Barmaki

15:45-18:00

Demos 
Session Chair: Stefan Scherer

The Application of Word Processor UI Paradigms to Audio and Animation Editing

Andre Milota

 

CuddleBits: Friendly Low-cost Furballs that Respond to Your Touch

Xi Laura Cang, Paul Bucci, Karon MacLean

 

Public Speaking Training with a Multimodal Interactive Virtual Audience Framework - Demonstration

Mathieu Chollet, Kalin Stefanov, Helmut Prendinger, Stefan Scherer

A Multimodal System for Public Speaking with Real Time Feedback

Fiona Dermody, Alistair Sutherland

Model of Personality-Based, Nonverbal Behavior in Affective Virtual Humanoid Character

Maryam Saberi, Ulysses Bernardet, Steve DiPaola

AttentiveLearner: Adaptive Mobile MOOC Learning via Implicit Cognitive States Inference

Xiang Xiao, Phuong Pham, Jingtao Wang

Interactive Web-based Image Sonification for the Blind

Torsten Wörtwein, Boris Schauerte, Karin Müller, Rainer Stiefelhagen

Nakama: A Companion for Non-verbal Affective Communication

Christian Willemse, Gerald Munters, Jan van Erp, Dirk Heylen

Wir im Kiez - Multimodal App for Mutual Help Among Elderly Neighbours

Sven Schmeier, Aaron Russ, Norbert Reithinger

Interact: Tightly-coupling Multimodal Dialog with an Interactive Virtual Assistant

Ethan Selfridge, Michael Johnston

The UTEP AGENT Framework

David Novick, Ivan Gris Sepulveda, Diego Rivera, Adriana Camacho, Alex Rayon, Mario Gutierrez

A Distributed Architecture for Interacting with NAO

Fabien Badeig, Quentin Pelorson, Soraya Arias, Vincent Drouard, Israel Gebru, Xiaofei Li, Georgios Evangelidis, Radu Horaud

Who's Speaking? Audio-Supervised Classification of Active Speakers in Video

Punarjay Chakravarty, Sayeh Mirzaei, Tinne Tuytelaars, Hugo Van hamme

Multimodal Interaction with a Bifocal View on Mobile Devices

Sebastien Pelurson, Laurence Nigay

Digital Flavor: Towards Digitally Simulating Virtual Flavors

Nimesha Ranasinghe, Gajan Suthokumar, Kuan-Yi Lee, Ellen Yi-Luen Do

Detecting Mastication - A Wearable Approach

Abdelkareem Bedri, Apoorva Verlekar, Edison Thomaz, Valerie Avva, Thad Starner

Adjacent Vehicle Collision Warning System using Image Sensor and Inertial Measurement Unit

Asif Iqbal, Carlos Busso, Nicholas R. Gans

19:00

Banquet 

Thursday, 12 November 2015

09:00-10:00

Keynote 3: Sustained Accomplishment Award Talk

Dr. Eric Horvitz
Keynote Chair: Daniel Gatica-Perez

10:00-10:30

Break

10:30-12:10

Oral Session 5: Interaction Techniques
Session Chair: Sharon Oviatt

10:30

Look & Pedal: Hands-free Navigation in Zoomable Information Spaces through Gaze-supported Foot Input

Konstantin Klamka, Andreas Siegel, Stefan Vogt, Fabian Göbel, Sophie Stellmach, Raimund Dachselt

10:55

*Gaze+Gesture: Expressive, Precise and Targeted Free-Space Interactions

Ishan Chatterjee, Robert Xiao, Chris Harrison

11:20

Digital Flavor: Towards Digitally Simulating Virtual Flavors
Nimesha Ranasinghe, Gajan Suthokumar, Kuan-Yi Lee, Ellen Yi-Luen Do

11:45

Different Strokes and Different Folks: Economical Dynamic Surface Sensing and Affect-related Touch Recognition 

Xi Laura Cang, Paul Bucci, Andrew Strang, Jeff Allen, Karon MacLean, Hong Yue Sean Liu

12:10-13:30

Lunch

13:30-13:45

Grand Challenges - Overviews

Cosmin Munteanu and Marcelo Worsley

Recognition of Social Touch Gestures Challenge 2015

Session Chairs: Merel Jung, Mannes Poel, Karon MacLean, and Laura Cang

Program available as pdf here

Emotion Recognition in the Wild Challenge 2015 

Session Chairs: Abhinav Dhall, Roland Goecke, Jyoti Joshi, and Tom Gedeon

Program available as pdf here

Multimodal Learning and Analytics Grand Challenge 2015 

Session Chairs: Katherine Chiluiza, Joseph Grafsgaard, Xavier Ochoa, and Marcelo Worsley

Program available as pdf here

13:45-15:00

Grand Challenges - Posters
Session Chair: Cosmin Munteanu and Marcelo Worsley

15:00-15:30

Break

15:30-17:00

Oral Session 6: Mobile and Wearable
Session Chair: Michael Johnston

15:30

MPHA: A Personal Hearing Doctor Based on Mobile Devices
Yuhao Wu, Jia Jia, WaiKim Leung, Yejun Liu, Lianhong Cai

15:55

*Towards Attentive, Bi-directional MOOC Learning on Mobile Devices
Xiang Xiao, Jingtao Wang

16:20

An Experiment on the Feasibility of Spatial Acquisition using a Moving Auditory Cue for Pedestrian Navigation

Yeseul Park, Kyle Koh, Heonjin Park, Jinwook Seo

16:35

A Wearable Multimodal Interface for Exploring Urban Points of Interest

Antti Jylhä, Yi-Ta Hsieh, Valeria Orso, Salvatore Andolina, Luciano Gamberini, Giulio Jacucci

17:00-18:00

ICMI Town Hall Meeting

Friday, 13 November 2015

08:45-17:00

1st International Workshop on Advancements in Social Signal Processing for Multimodal Interaction

Chairs: Khiet Truong, Dirk Heylen, Mohamed Chetouani, Bilge Mutlu, Albert Ali Salah

Program available here

Place: Seattle Ballroom 1

08:45-17:00

1st Workshop on Modeling INTERPERsonal SynchrONy And infLuence (INTERPERSONAL)

Chairs: Mohamed Chetouani, Giovanna Varni, Hanan Salam, Zakia Hammal, Jeffrey F. Cohn

Program available as pdf here

Place: Pioneer

09:00-16:00

Workshop on Multimodal Deception Detection

Chairs: Mohamed Abouelenien, Mihai Burzo, Rada Mihalcea, Veronica Perez-Rosas

Program available as pdf here

Place: Seattle Ballroom 3

09:00-17:00

3rd International Workshop on Emotion Representations and Modelling for Companion Technologies (ERM4CT 2015)

Chairs: Kim Hartmann, Ingo Siegert, Björn Schuller, Louis-Philippe Morency, Ali Albert Salah, Ronald Boeck

Program available as pdf here

Place: Belltown

09:00-12:00

Developing portable & context-aware multimodal applications for connected devices using W3C Multimodal Architecture

Chairs: Nagesh Kharidi, Raj Tumuluri

Place: Seattle Ballroom 2

ICMI 2015: ACM International Conference on Multimodal Interaction, 9-13 November 2015, Seattle, USA.