ICMI '14 - Proceedings of the 16th International Conference on Multimodal Interaction
MLA '14 - Proceedings of the 2014 ACM Workshop on Multimodal Learning Analytics Workshop and Grand Challenge
MAPTRAITS '14 - Proceedings of the 2014 Workshop on Mapping Personality Traits Challenge and Workshop
MMRWHRI '14 - Proceedings of the 2014 Workshop on Multimodal, Multi-Party, Real-World Human-Robot Interaction
UM3I '14 - Proceedings of the 2014 Workshop on Understanding and Modeling Multiparty, Multimodal Interactions
RFMIR '14 - Proceedings of the 2014 Workshop on Roadmapping the Future of Multimodal Interaction Research including Business Opportunities and Challenges
ERM4HCI '14 - Proceedings of the 2014 Workshop on Emotion Representation and Modelling in Human-Computer-Interaction-Systems
GazeIn '14 - Proceedings of the 7th Workshop on Eye Gaze in Intelligent Human Machine Interaction: Eye-Gaze & Multimodality
ICMI '14 - Proceedings of the 16th International Conference on Multimodal Interaction
Full Citation in the ACM Digital Library
SESSION: Keynote Address
Albert Ali Salah
Bursting our Digital Bubbles: Life Beyond the App
Yvonne Rogers
SESSION: Oral Session 1: Dialogue and Social Interaction
Toyoaki Nishida
Managing Human-Robot Engagement with Forecasts and... um... Hesitations
Dan Bohus
Eric Horvitz
Written Activity, Representations and Fluency as Predictors of Domain Expertise in Mathematics
Sharon Oviatt
Adrienne Cohen
Analysis of Respiration for Prediction of "Who Will Be Next Speaker and When?" in Multi-Party Meetings
Ryo Ishii
Kazuhiro Otsuka
Shiro Kumano
Junji Yamato
A Multimodal In-Car Dialogue System That Tracks The Driver's Attention
Spyros Kousidis
Casey Kennington
Timo Baumann
Hendrik Buschmeier
Stefan Kopp
David Schlangen
SESSION: Oral Session 2: Multimodal Fusion
Björn Schuller
Deep Multimodal Fusion: Combining Discrete Events and Continuous Signals
Héctor P. Martínez
Georgios N. Yannakakis
The Additive Value of Multimodal Features for Predicting Engagement, Frustration, and Learning during Tutoring
Joseph F. Grafsgaard
Joseph B. Wiggins
Alexandria Katarina Vail
Kristy Elizabeth Boyer
Eric N. Wiebe
James C. Lester
Computational Analysis of Persuasiveness in Social Multimedia: A Novel Dataset and Multimodal Prediction Approach
Sunghyun Park
Han Suk Shim
Moitreya Chatterjee
Kenji Sagae
Louis-Philippe Morency
Deception detection using a multimodal approach
Mohamed Abouelenien
Veronica Pérez-Rosas
Rada Mihalcea
Mihai Burzo
DEMONSTRATION SESSION: Demo Session 1
Kazuhiro Otsuka
Lale Akarun
Multimodal Interaction for Future Control Centers: An Interactive Demonstrator
Ferdinand Fuhrmann
Rene Kaiser
Emotional Charades
Stefano Piana
Alessandra Staglianò
Francesca Odone
Antonio Camurri
Glass Shooter: Exploring First-Person Shooter Game Control with Google Glass
Chun-Yen Hsu
Ying-Chao Tung
Han-Yu Wang
Silvia Chyou
Jer-Wei Lin
Mike Y. Chen
Orchestration for Group Videoconferencing: An Interactive Demonstrator
Wolfgang Weiss
Rene Kaiser
Manolis Falelakis
Integrating Remote PPG in Facial Expression Analysis Framework
H. Emrah Tasli
Amogh Gudi
Marten den Uyl
Context-Aware Multimodal Robotic Health Assistant
Vidyavisal Mangipudi
Raj Tumuluri
WebSanyog: A Portable Assistive Web Browser for People with Cerebral Palsy
Tirthankar Dasgupta
Manjira Sinha
Gagan Kandra
Anupam Basu
The hybrid Agent MARCO
Nicolas Riesterer
Christian Becker-Asano
Julien Hué
Christian Dornhege
Bernhard Nebel
Towards Supporting Non-linear Navigation in Educational Videos
Kuldeep Yadav
Kundan Shrivastava
Om Deshmukh
POSTER SESSION: Poster Session 1
Oya Aran
Louis-Philippe Morency
Detecting conversing groups with a single worn accelerometer
Hayley Hung
Gwenn Englebienne
Laura Cabrera Quiros
Identification of the Driver's Interest Point using a Head Pose Trajectory for Situated Dialog Systems
Young-Ho Kim
Teruhisa Misu
An Explorative Study on Crossmodal Congruence Between Visual and Tactile Icons Based on Emotional Responses
Taekbeom Yoo
Yongjae Yoo
Seungmoon Choi
Why We Watch the News: A Dataset for Exploring Sentiment in Broadcast Video News
Joseph G. Ellis
Brendan Jou
Shih-Fu Chang
Dyadic Behavior Analysis in Depression Severity Assessment Interviews
Stefan Scherer
Zakia Hammal
Ying Yang
Louis-Philippe Morency
Jeffrey F. Cohn
Touching the Void -- Introducing CoST: Corpus of Social Touch
Merel M. Jung
Ronald Poppe
Mannes Poel
Dirk K.J. Heylen
Unsupervised Domain Adaptation for Personalized Facial Emotion Recognition
Gloria Zen
Enver Sangineto
Elisa Ricci
Nicu Sebe
Predicting Influential Statements in Group Discussions using Speech and Head Motion Information
Fumio Nihei
Yukiko I. Nakano
Yuki Hayashi
Hung-Hsuan Huang
Shogo Okada
The Relation of Eye Gaze and Face Pose: Potential Impact on Speech Recognition
Malcolm Slaney
Andreas Stolcke
Dilek Hakkani-Tür
Speech-Driven Animation Constrained by Appropriate Discourse Functions
Najmeh Sadoughi
Yang Liu
Carlos Busso
Many Fingers Make Light Work: Non-Visual Capacitive Surface Exploration
Martin Halvey
Andy Crossan
Multimodal Interaction History and its use in Error Detection and Recovery
Felix Schüssel
Frank Honold
Miriam Schmidt
Nikola Bubalo
Anke Huckauf
Michael Weber
Gesture Heatmaps: Understanding Gesture Performance with Colorful Visualizations
Radu-Daniel Vatavu
Lisa Anthony
Jacob O. Wobbrock
Personal Aesthetics for Soft Biometrics: A Generative Multi-resolution Approach
Cristina Segalin
Alessandro Perina
Marco Cristani
Synchronising Physiological and Behavioural Sensors in a Driving Simulator
Ronnie Taib
Benjamin Itzstein
Kun Yu
Data-Driven Model of Nonverbal Behavior for Socially Assistive Human-Robot Interactions
Henny Admoni
Brian Scassellati
Towards Automated Assessment of Public Speaking Skills Using Multimodal Cues
Lei Chen
Gary Feng
Jilliam Joe
Chee Wee Leong
Christopher Kitchen
Chong Min Lee
Increasing Customers' Attention using Implicit and Explicit Interaction in Urban Advertisement
Matthias Wölfel
Luigi Bucchino
System for Presenting and Creating Smell Effects to Video
Risa Suzuki
Shutaro Homma
Eri Matsuura
Ken-ichi Okada
CrossMotion: Fusing Device and Image Motion for User Identification, Tracking and Device Association
Andrew D. Wilson
Hrvoje Benko
Statistical Analysis of Personality and Identity in Chats Using a Keylogging Platform
Giorgio Roffo
Cinzia Giorgetta
Roberta Ferrario
Walter Riviera
Marco Cristani
Understanding Users' Perceived Difficulty of Multi-Touch Gesture Articulation
Yosra Rekik
Radu-Daniel Vatavu
Laurent Grisoni
A Multimodal Context-based Approach for Distress Assessment
Sayan Ghosh
Moitreya Chatterjee
Louis-Philippe Morency
Exploring a Model of Gaze for Grounding in Multimodal HRI
Gregor Mehlmann
Markus Häring
Kathrin Janowski
Tobias Baur
Patrick Gebhard
Elisabeth André
Predicting Learning and Engagement in Tutorial Dialogue: A Personality-Based Model
Alexandria Katarina Vail
Joseph F. Grafsgaard
Joseph B. Wiggins
James C. Lester
Kristy Elizabeth Boyer
Eye Gaze for Spoken Language Understanding in Multi-modal Conversational Interactions
Dilek Hakkani-Tür
Malcolm Slaney
Asli Celikyilmaz
Larry Heck
SoundFLEX: Designing Audio to Guide Interactions with Shape-Retaining Deformable Interfaces
Koray Tahiroğlu
Thomas Svedström
Valtteri Wikström
Simon Overstall
Johan Kildal
Teemu Ahmaniemi
Investigating Intrusiveness of Workload Adaptation
Felix Putze
Tanja Schultz
SESSION: Keynote Address 2
Louis-Philippe Morency
Smart Multimodal Interaction through Big Data
Cafer Tosun
SESSION: Oral Session 3: Affect and Cognitive Modeling
Sharon Oviatt
Natural Communication about Uncertainties in Situated Interaction
Tomislav Pejsa
Dan Bohus
Michael F. Cohen
Chit W. Saw
James Mahoney
Eric Horvitz
The SWELL Knowledge Work Dataset for Stress and User Modeling Research
Saskia Koldijk
Maya Sappelli
Suzan Verberne
Mark A. Neerincx
Wessel Kraaij
Rhythmic Body Movements of Laughter
Radoslaw Niewiadomski
Maurizio Mancini
Yu Ding
Catherine Pelachaud
Gualtiero Volpe
Automatic Blinking Detection towards Stress Discovery
Alvaro Marcos-Ramiro
Daniel Pizarro-Perez
Marta Marron-Romera
Daniel Gatica-Perez
SESSION: Oral Session 4: Nonverbal Behaviors
Bülent Sankur
Mid-air Authentication Gestures: An Exploration of Authentication Based on Palm and Finger Motions
Ilhan Aslan
Andreas Uhl
Alexander Meschtscherjakov
Manfred Tscheligi
Automatic Detection of Naturalistic Hand-over-Face Gesture Descriptors
Marwa M. Mahmoud
Tadas Baltrušaitis
Peter Robinson
Capturing Upper Body Motion in Conversation: An Appearance Quasi-Invariant Approach
Alvaro Marcos-Ramiro
Daniel Pizarro-Perez
Marta Marron-Romera
Daniel Gatica-Perez
User Independent Gaze Estimation by Exploiting Similarity Measures in the Eye Pair Appearance Eigenspace
Nanxiang Li
Carlos Busso
SESSION: Doctoral Spotlight Session
Marco Cristani
Exploring multimodality for translator-computer interaction
Julián Zapata
Towards Social Touch Intelligence: Developing a Robust System for Automatic Touch Recognition
Merel M. Jung
Facial Expression Analysis for Estimating Pain in Clinical Settings
Karan Sikka
Realizing Robust Human-Robot Interaction under Real Environments with Noises
Takaaki Sugiyama
Speaker- and Corpus-Independent Methods for Affect Classification in Computational Paralinguistics
Heysem Kaya
The Impact of Changing Communication Practices
Ailbhe N. Finnerty
Multi-Resident Human Behaviour Identification in Ambient Assisted Living Environments
Hande Alemdar
Gaze-Based Proactive User Interface for Pen-Based Systems
Çağla Çığ
Appearance based user-independent gaze estimation
Nanxiang Li
Affective Analysis of Abstract Paintings Using Statistical Analysis and Art Theory
Andreza Sartori
The Secret Language of Our Body: Affect and Personality Recognition Using Physiological Signals
Julia Wache
Perceptions of Interpersonal Behavior are Influenced by Gender, Facial Expression Intensity, and Head Pose
Jeffrey M. Girard
Authoring Communicative Behaviors for Situated, Embodied Characters
Tomislav Pejsa
Multimodal Analysis and Modeling of Nonverbal Behaviors during Tutoring
Joseph F. Grafsgaard
SESSION: Keynote Address 3
Oya Aran
Computation of Emotions
Peter Robinson
SESSION: Oral Session 5: Mobile and Urban Interaction
Metin Sezgin
Non-Visual Navigation Using Combined Audio Music and Haptic Cues
Emily Fujimoto
Matthew Turk
Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions
Euan Freeman
Stephen Brewster
Vuokko Lantz
Once Upon a Crime: Towards Crime Prediction from Demographics and Mobile Data
Andrey Bogomolov
Bruno Lepri
Jacopo Staiano
Nuria Oliver
Fabio Pianesi
Alex Pentland
Impact of Coordinate Systems on 3D Manipulations in Mobile Augmented Reality
Philipp Tiefenbacher
Steven Wichert
Daniel Merget
Gerhard Rigoll
SESSION: Oral Session 6: Healthcare and Assistive Technologies
Dan Bohus
Digital Reading Support for The Blind by Multimodal Interaction
Yasmine N. El-Glaly
Francis Quek
Measuring Child Visual Attention using Markerless Head Tracking from Color and Depth Sensing Cameras
Jonathan Bidwell
Irfan A. Essa
Agata Rozga
Gregory D. Abowd
Bi-Modal Detection of Painful Reaching for Chronic Pain Rehabilitation Systems
Temitayo A. Olugbade
M.S. Hane Aung
Nadia Bianchi-Berthouze
Nicolai Marquardt
Amanda C. Williams
SESSION: Keynote Address 4
Jeffrey Cohn
A World without Barriers: Connecting the World across Languages, Distances and Media
Alexander Waibel
SESSION: The Second Emotion Recognition In The Wild Challenge
Emotion Recognition In The Wild Challenge 2014: Baseline, Data and Protocol
Abhinav Dhall
Roland Goecke
Jyoti Joshi
Karan Sikka
Tom Gedeon
Neural Networks for Emotion Recognition in the Wild
Michał Grosicki
Emotion Recognition in the Wild: Incorporating Voice and Lip Activity in Multimodal Decision-Level Fusion
Fabien Ringeval
Shahin Amiriparian
Florian Eyben
Klaus Scherer
Björn Schuller
Combining Multimodal Features with Hierarchical Classifier Fusion for Emotion Recognition in the Wild
Bo Sun
Liandong Li
Tian Zuo
Ying Chen
Guoyan Zhou
Xuewen Wu
Combining Modality-Specific Extreme Learning Machines for Emotion Recognition in the Wild
Heysem Kaya
Albert Ali Salah
Combining Multiple Kernel Methods on Riemannian Manifold for Emotion Recognition in the Wild
Mengyi Liu
Ruiping Wang
Shaoxin Li
Shiguang Shan
Zhiwu Huang
Xilin Chen
Enhanced Autocorrelation in Real World Emotion Recognition
Sascha Meudt
Friedhelm Schwenker
Emotion Recognition in the Wild with Feature Fusion and Multiple Kernel Learning
JunKai Chen
Zenghai Chen
Zheru Chi
Hong Fu
Improved Spatiotemporal Local Monogenic Binary Pattern for Emotion Recognition in The Wild
Xiaohua Huang
Qiuhai He
Xiaopeng Hong
Guoying Zhao
Matti Pietikäinen
Emotion Recognition in Real-world Conditions with Acoustic and Visual Features
Maxim Sidorov
Wolfgang Minker
SESSION: Workshop Overviews
ERM4HCI 2014: The 2nd Workshop on Emotion Representation and Modelling in Human-Computer-Interaction-Systems
Kim Hartmann
Björn Schuller
Ronald Böck
Gaze-in 2014: the 7th Workshop on Eye Gaze in Intelligent Human Machine Interaction
Hung-Hsuan Huang
Roman Bednarik
Kristiina Jokinen
Yukiko I. Nakano
MAPTRAITS 2014 - The First Audio/Visual Mapping Personality Traits Challenge - An Introduction: Perceived Personality and Social Dimensions
Oya Celiktutan
Florian Eyben
Evangelos Sariyanidi
Hatice Gunes
Björn Schuller
MLA'14: Third Multimodal Learning Analytics Workshop and Grand Challenges
Xavier Ochoa
Marcelo Worsley
Katherine Chiluiza
Saturnino Luz
ICMI 2014 Workshop on Multimodal, Multi-Party, Real-World Human-Robot Interaction
Mary Ellen Foster
Manuel Giuliani
Ronald Petrick
An Outline of Opportunities for Multimodal Research
Dirk Heylen
Alessandro Vinciarelli
UM3I 2014: International Workshop on Understanding and Modeling Multiparty, Multimodal Interactions
Samer Al Moubayed
Dan Bohus
Anna Esposito
Dirk Heylen
Maria Koutsombogera
Harris Papageorgiou
Gabriel Skantze