ICMI 2016 Conference Program

Saturday, 12 November 2016 (Time 24 Bldg.)

The registration desk will be open from 8:30 to 17:00 on the 18th floor of the Time 24 Bldg.

9:00 - 12:30 Tutorial: Multimodal Machine Learning
Time24: Room 181
Dr. Louis-Philippe Morency
9:00 - 17:30 Doctoral Consortium
Time24: Room 182
Doctoral Consortium program
9:00 - 17:30 Grand Challenge
Time24: Room 183
Emotion Recognition in the Wild Challenge 2016

Sunday, 13 November 2016 (Miraikan)

The registration desk will be open from 8:30 to 17:00 on the 7th floor of Miraikan.

All sessions will take place in the Miraikan Hall, except for the Demos, which will be in the Innovation Hall, and the Posters, which will be in Conference Room 3.
Nominees for the Best Paper Award and the Student Best Paper Award are marked with a star (*).

09:00 Welcome
Yukiko Nakano
09:15-10:15 Keynote 1: Understanding People by Tracking Their Word Use
Prof. James W. Pennebaker
Session Chair: Louis-Philippe Morency
10:15-10:45 Coffee Break
10:45-12:25 Oral Session 1: Multimodal Social Agents
Session Chair: Elisabeth André (Augsburg University)
10:45 *Trust Me: Multimodal Signals of Trustworthiness
Gale Lucas, Giota Stratou, Shari Lieblich, and Jonathan Gratch
11:10 Semi-situated Learning of Verbal and Nonverbal Content for Repeated Human-Robot Interaction
Iolanda Leite, André Pereira, Allison Funkhouser, Boyang Li, and Jill Fain Lehman
11:35 Towards Building an Attentive Artificial Listener: On the Perception of Attentiveness in Audio-Visual Feedback Tokens
Catharine Oertel, José Lopes, Yu Yu, Kenneth A. Funes Mora, Joakim Gustafson, Alan W. Black, and Jean-Marc Odobez
12:00 Sequence-Based Multimodal Behavior Modeling for Social Agents
Soumia Dermouche and Catherine Pelachaud
12:25-14:00 Lunch
Conference Room 3
14:00-15:30 Oral Session 2: Physiological and Tactile Modalities
Session Chair: Jonathan Gratch (University of Southern California)
14:00 *Adaptive Review for Mobile MOOC Learning via Implicit Physiological Signal Sensing
Phuong Pham and Jingtao Wang
14:25 *Visuotactile Integration for Depth Perception in Augmented Reality
Nina Rosa, Wolfgang Hürst, Peter Werkhoven, and Remco Veltkamp
14:50 Exploring Multimodal Biosignal Features for Stress Detection during Indoor Mobility
Kyriaki Kalimeri and Charalampos Saitis
15:15 An IDE for Multimodal Controls in Smart Buildings
Sebastian Peters, Jan Ole Johanssen, and Bernd Bruegge
15:30-16:00 Coffee Break
16:00-18:00 Poster Session 1
Session Chair: TBA
Personalized Unknown Word Detection in Non-native Language Reading using Eye Gaze
Rui Hiraoka, Hiroki Tanaka, Sakriani Sakti, Graham Neubig, and Satoshi Nakamura
Discovering Facial Expressions for States of Amused, Persuaded, Informed, Sentimental and Inspired
Daniel McDuff
Do Speech Features for Detecting Cognitive Load Depend on Specific Languages?
Rui Chen, Tiantian Xie, Yingtao Xie, Tao Lin, and Ningjiu Tang
Training on the Job: Behavioral Analysis of Job Interviews in Hospitality
Skanda Muralidhar, Laurent Son Nguyen, Denise Frauendorfer, Jean-Marc Odobez, Marianne Schmid Mast, and Daniel Gatica-Perez
Emotion Spotting: Discovering Regions of Evidence in Audio-Visual Emotion Expressions
Yelin Kim and Emily Mower Provost
Semi-supervised Model Personalization for Improved Detection of Learner's Emotional Engagement
Nese Alyuz, Eda Okur, Ece Oktay, Utku Genc, Sinem Aslan, Sinem Emine Mete, Bert Arnrich, and Asli Arslan Esme
Driving Maneuver Prediction using Car Sensor and Driver Physiological Signals
Nanxiang Li, Teruhisa Misu, Ashish Tawari, Alexandre Miranda, Chihiro Suga, and Kikuo Fujimura
On Leveraging Crowdsourced Data for Automatic Perceived Stress Detection
Jonathan Aigrain, Arnaud Dapogny, Kévin Bailly, Séverine Dubuisson, Marcin Detyniecki, and Mohamed Chetouani
Investigating the Impact of Automated Transcripts on Non-native Speakers' Listening Comprehension
Xun Cao, Naomi Yamashita, and Toru Ishida
Speaker Impact on Audience Comprehension for Academic Presentations
Keith Curtis, Gareth J. F. Jones, and Nick Campbell
EmoReact: A Multimodal Approach and Dataset for Recognizing Emotional Responses in Children
Behnaz Nojavanasghari, Tadas Baltrušaitis, Charles E. Hughes, and Louis-Philippe Morency
Bimanual Input for Multiscale Navigation with Pressure and Touch Gestures
Sebastien Pelurson and Laurence Nigay
Intervention-Free Selection using EEG and Eye Tracking
Felix Putze, Johannes Popp, Jutta Hild, Jürgen Beyerer, and Tanja Schultz
Automated Scoring of Interview Videos using Doc2Vec Multimodal Feature Extraction Paradigm
Lei Chen, Gary Feng, Chee Wee Leong, Blair Lehman, Michelle Martin-Raugh, Harrison Kell, Chong Min Lee, and Su-Youn Yoon
Estimating Communication Skills using Dialogue Acts and Nonverbal Features in Multiple Discussion Datasets
Shogo Okada, Yoshihiko Ohtake, Yukiko I. Nakano, Yuki Hayashi, Hung-Hsuan Huang, Yutaka Takase, and Katsumi Nitta
Multi-Sensor Modeling of Teacher Instructional Segments in Live Classrooms
Patrick J. Donnelly, Nathaniel Blanchard, Borhan Samei, Andrew M. Olney, Xiaoyi Sun, Brooke Ward, Sean Kelly, Martin Nystrand, and Sidney K. D'Mello
16:00-18:00 Demo Session 1
Session Chair: Ronald Poppe and Ryo Ishii
Social Signal Processing for Dummies
Ionut Damian, Michael Dietz, Frank Gaibler, and Elisabeth André
Metering "Black Holes": Networking Stand-Alone Applications for Distributed Multimodal Synchronization
Michael Cohen, Yousuke Nagayama, and Bektur Ryskeldiev
Towards a Multimodal Adaptive Lighting System for Visually Impaired Children
Euan Freeman, Graham Wilson, and Stephen Brewster
Multimodal Affective Feedback: Combining Thermal, Vibrotactile, Audio and Visual Signals
Graham Wilson, Euan Freeman, and Stephen Brewster
Niki and Julie: A Robot and Virtual Human for Studying Multimodal Social Interaction
Ron Artstein, David Traum, Jill Boberg, Alesia Gainer, Jonathan Gratch, Emmanuel Johnson, Anton Leuski, and Mikio Nakano
A Demonstration of Multimodal Debrief Generation for AUVs, Post-mission and In-mission
Helen Hastie, Xingkun Liu, and Pedro Patron
Laughter Detection in the Wild: Demonstrating a Tool for Mobile Social Signal Processing and Visualization
Simon Flutura, Johannes Wagner, Florian Lingenfelser, Andreas Seiderer, and Elisabeth André
Active Speaker Detection with Audio-Visual Co-Training
Punarjay Chakravarty, Jeroen Zegers, Tinne Tuytelaars, and Hugo Van hamme
Panoptic Studio: A Massively Multiview System for Social Interaction Capture (Research exhibit)
Hanbyul Joo, Tomas Simon, Xulong Li, Hao Liu, Lei Tan, Lin Gui, Sean Banerjee, Timothy Godisart, Bart Nabbe, Iain Matthews, Takeo Kanade, Shohei Nobuhara, and Yaser Sheikh
16:00-18:00 Doctoral Spotlight Session
Session Chair: Dirk Heylen and Samer Al Moubayed
The influence of appearance and interaction strategy of a social robot on the feeling of uncanniness in humans
Maike Paetzel
Viewing Support System for Multi-view Videos
Xueting Wang
Engaging Children with Autism in a Shape Perception Task using a Haptic Force Feedback Interface
Alix Perusseau-Lambert
Modeling User's Decision Process through Gaze Behavior
Kei Shimonishi
Multimodal Positive Computing System for Public Speaking with Real-Time Feedback
Fiona Dermody
Prediction/Assessment of Communication Skill Using Multimodal Cues in Social Interactions
Sowmya Rasipuram
Player/Avatar Body Relations in Multimodal Augmented Reality Games
Nina Rosa
Computational Model for Interpersonal Attitude Expression
Soumia Dermouche
Assessing Symptoms of Excessive SNS Usage Based on User Behavior and Emotion
Ploypailin Intapong
Kawaii Feeling Estimation by Product Attributes and Biological Signals
Tipporn Laohakangvalvit
Multimodal Sensing of Affect Intensity
Shalini Bhatia
Enriching Student Learning Experience Using Augmented Reality and Smart Learning Objects
Anmol Srivastava
Automated recognition of facial expressions authenticity
Krystian Radlak
Improving the Generalizability of Emotion Recognition Systems: Towards Emotion Recognition in the Wild
Biqiao Zhang
18:00 Welcome Reception

Monday, 14 November 2016 (Miraikan)

All sessions will take place in the Miraikan Hall, except for the Demos, which will be in the Innovation Hall, and the Posters, which will be in Conference Room 3.
Nominees for the Best Paper Award and the Student Best Paper Award are marked with a star (*).

09:00-10:00 Keynote 2: Learning to Generate Images and Their Descriptions
Prof. Richard Zemel
Session Chair: Carlos Busso
10:00-10:20 Coffee Break
10:20-12:00 Oral Session 3: Groups, Teams and Meetings
Session Chair: Nick Campbell (Trinity College Dublin)
10:20 Meeting Extracts for Discussion Summarization Based on Multimodal Nonverbal Information
Fumio Nihei, Yukiko I. Nakano, and Yutaka Takase
10:45 Getting to Know You: A Multimodal Investigation of Team Behavior and Resilience to Stress
Catherine Neubauer, Joshua Woolley, Peter Khooshabeh, and Stefan Scherer
11:10 Measuring the Impact of Multimodal Behavioural Feedback Loops on Social Interactions
Ionut Damian, Tobias Baur, and Elisabeth André
11:35 Analyzing Mouth-Opening Transition Pattern for Predicting Next Speaker in Multi-party Meetings
Ryo Ishii, Shiro Kumano, and Kazuhiro Otsuka
12:00-13:20 Lunch
Conference Room 3
13:20-14:40 Oral Session 4: Personality and Emotion
Session Chair: Jill Lehman (Disney Research)
13:20 *Automatic Recognition of Self-Reported and Perceived Emotion: Does Joint Modeling Help?
Biqiao Zhang, Georg Essl, and Emily Mower Provost
13:45 Personality Classification and Behaviour Interpretation: An Approach Based on Feature Categories
Sheng Fang, Catherine Achard, and Séverine Dubuisson
14:10 Multiscale Kernel Locally Penalised Discriminant Analysis Exemplified by Emotion Recognition in Speech
Xinzhou Xu, Jun Deng, Maryna Gavryukova, Zixing Zhang, Li Zhao, and Björn Schuller
14:25 Estimating Self-Assessed Personality from Body Movements and Proximity in Crowded Mingling Scenarios
Laura Cabrera-Quiros, Ekin Gedik, and Hayley Hung
14:40-15:00 Coffee Break
15:00-17:00 Poster Session 2
Session Chair: TBA
Deep Learning Driven Hypergraph Representation for Image-Based Emotion Recognition
Yuchi Huang and Hanqing Lu
Towards a Listening Agent: A System Generating Audiovisual Laughs and Smiles to Show Interest
Kevin El Haddad, Hüseyin Çakmak, Emer Gilmartin, Stéphane Dupont, and Thierry Dutoit
Sound Emblems for Affective Multimodal Output of a Robotic Tutor: A Perception Study
Helen Hastie, Pasquale Dente, Dennis Küster, and Arvid Kappas
Automatic Detection of Very Early Stage of Dementia through Multimodal Interaction with Computer Avatars
Hiroki Tanaka, Hiroyoshi Adachi, Norimichi Ukita, Takashi Kudo, and Satoshi Nakamura
MobileSSI: Asynchronous Fusion for Social Signal Interpretation in the Wild
Simon Flutura, Johannes Wagner, Florian Lingenfelser, Andreas Seiderer, and Elisabeth André
Language Proficiency Assessment of English L2 Speakers Based on Joint Analysis of Prosody and Native Language
Yue Zhang, Felix Weninger, Anton Batliner, Florian Hönig, and Björn Schuller
Training Deep Networks for Facial Expression Recognition with Crowd-Sourced Label Distribution
Emad Barsoum, Cha Zhang, Cristian Canton Ferrer, and Zhengyou Zhang
Deep Multimodal Fusion for Persuasiveness Prediction
Behnaz Nojavanasghari, Deepak Gopinath, Jayanth Koushik, Tadas Baltrušaitis, and Louis-Philippe Morency
Comparison of Three Implementations of HeadTurn: A Multimodal Interaction Technique with Gaze and Head Turns
Oleg Špakov, Poika Isokoski, Jari Kangas, Jussi Rantala, Deepak Akkil, and Roope Raisamo
Effects of Multimodal Cues on Children's Perception of Uncanniness in a Social Robot
Maike Paetzel, Christopher Peters, Ingela Nyström, and Ginevra Castellano
Multimodal Feedback for Finger-Based Interaction in Mobile Augmented Reality
Wolfgang Hürst and Kevin Vriens
Smooth Eye Movement Interaction using EOG Glasses
Murtaza Dhuliawala, Juyoung Lee, Junichi Shimizu, Andreas Bulling, Kai Kunze, Thad Starner, and Woontack Woo
Active Speaker Detection with Audio-Visual Co-training
Punarjay Chakravarty, Jeroen Zegers, Tinne Tuytelaars, and Hugo Van hamme
Detecting Emergent Leader in a Meeting Environment using Nonverbal Visual Features Only
Cigdem Beyan, Nicolò Carissimi, Francesca Capozzi, Sebastiano Vascon, Matteo Bustreo, Antonio Pierro, Cristina Becchio, and Vittorio Murino
Stressful First Impressions in Job Interviews
Ailbhe N. Finnerty, Skanda Muralidhar, Laurent Son Nguyen, Fabio Pianesi, and Daniel Gatica-Perez
15:00-17:00 Demo Session 2
Session Chair: Ronald Poppe and Ryo Ishii
Multimodal System for Public Speaking with Real Time Feedback: A Positive Computing Perspective
Fiona Dermody and Alistair Sutherland
Multimodal Biofeedback System Integrating Low-Cost Easy Sensing Devices
Wataru Hashiguchi, Junya Morita, Takatsugu Hirayama, Kenji Mase, Kazunori Yamada, and Mayu Yokoya
A Telepresence System using a Flexible Textile Display
Kana Kushida and Hideyuki Nakanishi
Large-Scale Multimodal Movie Dialogue Corpus
Ryu Yasuhara, Masashi Inoue, Ikuya Suga, and Tetsuo Kosaka
Immersive Virtual Reality with Multimodal Interaction and Streaming Technology
Wan-Lun Tsai, Yu-Lun Hsu, Chi-Po Lin, Chen-Yu Zhu, Yu-Cheng Chen, and Min-Chun Hu
Multimodal Interaction with the Autonomous Android ERICA
Divesh Lala, Pierrick Milhorat, Koji Inoue, Tianyu Zhao, and Tatsuya Kawahara
Ask Alice: An Artificial Retrieval of Information Agent
Michel Valstar, Catherine Pelachaud, Dirk Heylen, Angelo Cafaro, Soumia Dermouche, Alexandru Ghitulescu, Elisabeth André, Tobias Baur, Johannes Wagner, Laurent Durieu, Matthew Aylett, Blaise Potard, Eduardo Coutinho, Björn Schuller, Yue Zhang, Mariet Theune, and Jelte van Waterschoot
Design of Multimodal Instructional Tutoring Agents using Augmented Reality and Smart Learning Objects
Anmol Srivastava and Pradeep Yammiyavar
AttentiveVideo: Quantifying Emotional Responses to Mobile Video Advertisements
Phuong Pham and Jingtao Wang
Young Merlin: An Embodied Conversational Agent in Virtual Reality
Ivan Gris, Diego Rivera, Alex Rayon, Adriana Camacho, and David Novick
Utilizing Multimodal Social Signal Processing to Assess Job Interview Videos – From Lab to Cloud
Lei Chen, Gary Feng, Chee Wee Leong, Blair Lehman, Michelle Martin-Raugh, Harrison Kell, Chong Min Lee, and Su-Youn Yoon
Engaging and Comprehensible Summarization of Academic Presentations
Keith Curtis, Gareth J. F. Jones, and Nick Campbell
15:00-17:00 Grand Challenge Posters
Session Chair: TBA
17:00-18:00 ICMI Awardee Talk: Help Me If You Can: Towards Multiadaptive Interaction Platforms
Prof. Wolfgang Wahlster
Session Chair: Elisabeth André
19:00-22:00 Banquet
Buses will be leaving at 18:15

Tuesday, 15 November 2016 (Miraikan)

All sessions will take place in the Miraikan Hall.

09:30-10:30 Keynote 3: Embodied Media: Expanding Human Capacity via Virtual Reality and Telexistence
Prof. Susumu Tachi
Session Chair: Toyoaki Nishida
10:30-11:00 Coffee Break
11:00-12:30 Oral Session 5: Gesture, Touch and Haptics
Session Chair: Sharon Oviatt (Incaa Designs)
11:00 *Analyzing the Articulation Features of Children's Touchscreen Gestures
Alex Shaw and Lisa Anthony
11:25 Reach Out and Touch Me: Effects of Four Distinct Haptic Technologies on Affective Touch in Virtual Reality
Imtiaj Ahmed, Ville Harjunen, Giulio Jacucci, Eve Hoggan, Niklas Ravaja, and Michiel M. Spapé
11:50 Using Touchscreen Interaction Data to Predict Cognitive Workload
Philipp Mock, Peter Gerjets, Maike Tibus, Ulrich Trautwein, Korbinian Möller, and Wolfgang Rosenstiel
12:15 Exploration of Virtual Environments on Tablet: Comparison between Tactile and Tangible Interaction Techniques
Adrien Arnaud, Jean-Baptiste Corrégé, Céline Clavel, Michèle Gouiffès, and Mehdi Ammi
12:30-14:00 Lunch
Conference Room 3
14:00-15:40 Oral Session 6: Skill Training and Assessment
Session Chair: Catherine Pelachaud (ISIR, University of Paris 6)
14:00 *Understanding the Impact of Personal Feedback on Face-to-Face Interactions in the Workplace
Afra Mashhadi, Akhil Mathur, Marc Van den Broeck, Geert Vanderhulst, and Fahim Kawsar
14:25 *Asynchronous Video Interviews vs. Face-to-Face Interviews for Communication Skill Measurement: A Systematic Study
Sowmya Rasipuram, Pooja Rao S. B., and Dinesh Babu Jayagopi
14:50 Context and Cognitive State Triggered Interventions for Mobile MOOC Learning
Xiang Xiao and Jingtao Wang
15:15 Native vs. Non-native Language Fluency Implications on Multimodal Interaction for Interpersonal Skills Training
Mathieu Chollet, Helmut Prendinger, and Stefan Scherer
15:40-16:00 Coffee Break
16:00-16:15 Grand Challenge Overview
The Fourth Emotion Recognition in the Wild Challenge (EmotiW) 2016
Abhinav Dhall, Roland Goecke, Jyoti Joshi, and Tom Gedeon
16:15-17:30 ICMI Town Hall Meeting

Wednesday, 16 November 2016 (Time 24 Bldg.)

09:00-17:00 The Workshop on Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction (MA3HMI)
Chairs: Ronald Böck, Francesca Bonin, Nick Campbell, and Ronald Poppe
Program available here
Time24: Room 181
09:00-17:30 The 1st Workshop on Multi-Sensorial Approaches to Human-Food Interaction (Human-Food)
Chairs: Anton Nijholt, Carlos Velasco, Kasun Karunanayaka, and Gijs Huisman
Invited Speaker: Takuji Narumi
Program available here
Time24: Room 183
13:30-17:30 The 1st Workshop on Embodied Interaction with Smart Environments (EISE)
Chairs: Patrick Holthaus, Thomas Hermann, Sebastian Wrede, Sven Wachsmuth, and Britta Wrede
Invited Speaker: Takayuki Kanda
Program available here
Time24: Room 206
09:00-12:00 The Workshop on Social Learning and Multimodal Interaction for Designing Artificial Agents (SLMIDAA)
Chairs: Mohamed Chetouani, Salvatore M. Anzalone, Giovanna Varni, Isabelle Hupont Torres, Ginevra Castellano, Angelica Lim, and Gentiane Venture
Program available here
Time24: Room 206
09:00-15:30 The Workshop on Multimodal Virtual and Augmented Reality (MVAR)
Chairs: Wolfgang Hürst, Daisuke Iwai, and Prabhakaran Balakrishnan
Invited Speaker: Hsin-Ni Ho
Program available here
Time24: Room 182
09:00-17:05 The 2nd Workshop on Emotion Representations and Modelling for Companion Systems (ERM4CT) and
The 2nd International Workshop on Advancements in Social Signal Processing for Multimodal Interaction (ASSP4MI)
Chairs: Kim Hartmann, Ingo Siegert, Ali Albert Salah, Khiet P. Truong, Dirk Heylen, Toyoaki Nishida, and Mohamed Chetouani
Invited Speakers: Neşe Alyüz and Andreas Wendemuth
Program available here: ERM4CT | ASSP4MI
Time24: Room 207
