ICMI 2017: Proceedings of the 19th ACM International Conference on Multimodal Interaction
SESSION: Invited Talks
Gastrophysics: using technology to enhance the experience of food and drink (keynote)
Charles Spence
Collaborative robots: from action and interaction to collaboration (keynote)
Danica Kragic
Situated conceptualization: a framework for multimodal interaction (keynote)
Lawrence Barsalou
Steps towards collaborative multimodal dialogue (sustained contribution award)
Phil Cohen
SESSION: Oral Session 1: Children and Interaction
Tablets, tabletops, and smartphones: cross-platform comparisons of children’s touchscreen interactions
Julia Woodward
Alex Shaw
Aishat Aloba
Ayushi Jain
Jaime Ruiz
Lisa Anthony
Toward an efficient body expression recognition based on the synthesis of a neutral movement
Arthur Crenn
Alexandre Meyer
Rizwan Ahmed Khan
Hubert Konik
Saida Bouakaz
Interactive narration with a child: impact of prosody and facial expressions
Ovidiu Șerban
Mukesh Barange
Sahba Zojaji
Alexandre Pauchet
Adeline Richard
Emilie Chanoni
Comparing human and machine recognition of children’s touchscreen stroke gestures
Alex Shaw
Jaime Ruiz
Lisa Anthony
SESSION: Oral Session 2: Understanding Human Behaviour
Virtual debate coach design: assessing multimodal argumentation performance
Volha Petukhova
Tobias Mayer
Andrei Malchanau
Harry Bunt
Predicting the distribution of emotion perception: capturing inter-rater variability
Biqiao Zhang
Georg Essl
Emily Mower Provost
Automatically predicting human knowledgeability through non-verbal cues
Abdelwahab Bourai
Tadas Baltrušaitis
Louis-Philippe Morency
Pooling acoustic and lexical features for the prediction of valence
Zakaria Aldeneh
Soheil Khorram
Dimitrios Dimitriadis
Emily Mower Provost
SESSION: Oral Session 3: Touch and Gesture
Hand-to-hand: an intermanual illusion of movement
Dario Pittera
Marianna Obrist
Ali Israr
An investigation of dynamic crossmodal instantiation in TUIs
Feng Feng
Tony Stockman
“Stop over there”: natural gesture and speech interaction for non-critical spontaneous intervention in autonomous driving
Robert Tscharn
Marc Erich Latoschik
Diana Löffler
Jörn Hurtienne
Pre-touch proxemics: moving the design space of touch targets from still graphics towards proxemic behaviors
Ilhan Aslan
Elisabeth André
Freehand grasping in mixed reality: analysing variation during transition phase of interaction
Maadh Al-Kalbani
Maite Frutos-Pascual
Ian Williams
Rhythmic micro-gestures: discreet interaction on-the-go
Euan Freeman
Gareth Griffiths
Stephen A. Brewster
SESSION: Oral Session 4: Sound and Interaction
Evaluation of psychoacoustic sound parameters for sonification
Jamie Ferguson
Stephen A. Brewster
Utilising natural cross-modal mappings for visual control of feature-based sound synthesis
Augoustinos Tsiros
Grégory Leplâtre
SESSION: Oral Session 5: Methodology
Automatic classification of auto-correction errors in predictive text entry based on EEG and context information
Felix Putze
Maik Schünemann
Tanja Schultz
Wolfgang Stuerzlinger
Cumulative attributes for pain intensity estimation
Joy O. Egede
Michel Valstar
Towards the use of social interaction conventions as prior for gaze model adaptation
Rémy Siegfried
Yu Yu
Jean-Marc Odobez
Multimodal sentiment analysis with word-level fusion and reinforcement learning
Minghai Chen
Sen Wang
Paul Pu Liang
Tadas Baltrušaitis
Amir Zadeh
Louis-Philippe Morency
IntelliPrompter: speech-based dynamic note display interface for oral presentations
Reza Asadi
Ha Trinh
Harriet J. Fell
Timothy W. Bickmore
SESSION: Oral Session 6: Artificial Agents and Wearable Sensors
Head and shoulders: automatic error detection in human-robot interaction
Pauline Trung
Manuel Giuliani
Michael Miksch
Gerald Stollnberger
Susanne Stadler
Nicole Mirnig
Manfred Tscheligi
The reliability of non-verbal cues for situated reference resolution and their interplay with language: implications for human robot interaction
Stephanie Gross
Brigitte Krenn
Matthias Scheutz
Do you speak to a human or a virtual agent? Automatic analysis of user’s social cues during mediated communication
Magalie Ochs
Nathan Libermann
Axel Boidin
Thierry Chaminade
Estimating verbal expressions of task and social cohesion in meetings by quantifying paralinguistic mimicry
Marjolein C. Nanninga
Yanxia Zhang
Nale Lehmann-Willenbrock
Zoltán Szlávik
Hayley Hung
Data augmentation of wearable sensor data for Parkinson’s disease monitoring using convolutional neural networks
Terry T. Um
Franz M. J. Pfister
Daniel Pichler
Satoshi Endo
Muriel Lang
Sandra Hirche
Urban Fietzek
Dana Kulić
SESSION: Poster Session 1
Automatic assessment of communication skill in non-conventional interview settings: a comparative study
Pooja Rao S. B
Sowmya Rasipuram
Rahul Das
Dinesh Babu Jayagopi
Low-intrusive recognition of expressive movement qualities
Radoslaw Niewiadomski
Maurizio Mancini
Stefano Piana
Paolo Alborno
Gualtiero Volpe
Antonio Camurri
Digitising a medical clerking system with multimodal interaction support
Harrison South
Martin Taylor
Huseyin Dogan
Nan Jiang
GazeTap: towards hands-free interaction in the operating room
Benjamin Hatscher
Maria Luz
Lennart E. Nacke
Norbert Elkmann
Veit Müller
Christian Hansen
Boxer: a multimodal collision technique for virtual objects
Byungjoo Lee
Qiao Deng
Eve Hoggan
Antti Oulasvirta
Trust triggers for multimodal command and control interfaces
Helen Hastie
Xingkun Liu
Pedro Patron
TouchScope: a hybrid multitouch oscilloscope interface
Matthew Heinz
Sven Bertel
Florian Echtler
A multimodal system to characterise melancholia: cascaded bag of words approach
Shalini Bhatia
Munawar Hayat
Roland Goecke
Crowdsourcing ratings of caller engagement in thin-slice videos of human-machine dialog: benefits and pitfalls
Vikram Ramanarayanan
Chee Wee Leong
David Suendermann-Oeft
Keelan Evanini
Modelling fusion of modalities in multimodal interactive systems with MMMM
Bruno Dumas
Jonathan Pirau
Denis Lalanne
Temporal alignment using the incremental unit framework
Casey Kennington
Ting Han
David Schlangen
Multimodal gender detection
Mohamed Abouelenien
Verónica Pérez-Rosas
Rada Mihalcea
Mihai Burzo
How may I help you? behavior and impressions in hospitality service encounters
Skanda Muralidhar
Marianne Schmid Mast
Daniel Gatica-Perez
Tracking liking state in brain activity while watching multiple movies
Naoto Terasawa
Hiroki Tanaka
Sakriani Sakti
Satoshi Nakamura
Does serial memory of locations benefit from spatially congruent audiovisual stimuli? Investigating the effect of adding spatial sound to visuospatial sequences
Benjamin Stahl
Georgios Marentakis
ZSGL: zero shot gestural learning
Naveen Madapana
Juan Wachs
Markov reward models for analyzing group interaction
Gabriel Murray
Analyzing first impressions of warmth and competence from observable nonverbal cues in expert-novice interactions
Beatrice Biancardi
Angelo Cafaro
Catherine Pelachaud
The NoXi database: multimodal recordings of mediated novice-expert interactions
Angelo Cafaro
Johannes Wagner
Tobias Baur
Soumia Dermouche
Mercedes Torres Torres
Catherine Pelachaud
Elisabeth André
Michel Valstar
Head-mounted displays as opera glasses: using mixed-reality to deliver an egalitarian user experience during live events
Carl Bishop
Augusto Esteves
Iain McGregor
SESSION: Poster Session 2
Analyzing gaze behavior during turn-taking for estimating empathy skill level
Ryo Ishii
Shiro Kumano
Kazuhiro Otsuka
Text based user comments as a signal for automatic language identification of online videos
A. Seza Doğruöz
Natalia Ponomareva
Sertan Girgin
Reshu Jain
Christoph Oehler
Gender and emotion recognition with implicit user signals
Maneesh Bilalpur
Seyed Mostafa Kia
Manisha Chawla
Tat-Seng Chua
Ramanathan Subramanian
Animating the adelino robot with ERIK: the expressive robotics inverse kinematics
Tiago Ribeiro
Ana Paiva
Automatic detection of pain from spontaneous facial expressions
Fatma Meawad
Su-Yin Yang
Fong Ling Loy
Evaluating content-centric vs. user-centric ad affect recognition
Abhinav Shukla
Shruti Shriya Gullapuram
Harish Katti
Karthik Yadati
Mohan Kankanhalli
Ramanathan Subramanian
A domain adaptation approach to improve speaker turn embedding using face representation
Nam Le
Jean-Marc Odobez
Computer vision based fall detection by a convolutional neural network
Miao Yu
Liyun Gong
Stefanos Kollias
Predicting meeting extracts in group discussions using multimodal convolutional neural networks
Fumio Nihei
Yukiko I. Nakano
Yutaka Takase
The relationship between task-induced stress, vocal changes, and physiological state during a dyadic team task
Catherine Neubauer
Mathieu Chollet
Sharon Mozgai
Mark Dennison
Peter Khooshabeh
Stefan Scherer
Meyendtris: a hands-free, multimodal Tetris clone using eye tracking and passive BCI for intuitive neuroadaptive gaming
Laurens R. Krol
Sarah-Christin Freytag
Thorsten O. Zander
AMHUSE: a multimodal dataset for HUmour SEnsing
Giuseppe Boccignone
Donatello Conte
Vittorio Cuculo
Raffaella Lanzarotti
GazeTouchPIN: protecting sensitive data on mobile devices using secure multimodal authentication
Mohamed Khamis
Mariam Hassib
Emanuel von Zezschwitz
Andreas Bulling
Florian Alt
Multi-task learning of social psychology assessments and nonverbal features for automatic leadership identification
Cigdem Beyan
Francesca Capozzi
Cristina Becchio
Vittorio Murino
Multimodal analysis of vocal collaborative search: a public corpus and results
Daniel McDuff
Paul Thomas
Mary Czerwinski
Nick Craswell
UE-HRI: a new dataset for the study of user engagement in spontaneous human-robot interactions
Atef Ben-Youssef
Chloé Clavel
Slim Essid
Miriam Bilac
Marine Chamoux
Angelica Lim
Mining a multimodal corpus of doctor’s training for virtual patient’s feedbacks
Chris Porhet
Magalie Ochs
Jorane Saubesty
Grégoire de Montcheuil
Roxane Bertrand
Multimodal affect recognition in an interactive gaming environment using eye tracking and speech signals
Ashwaq Alhargan
Neil Cooke
Tareq Binjammaz
SESSION: Demonstrations 1
Multimodal interaction in classrooms: implementation of tangibles in integrated music and math lessons
Jennifer Müller
Uwe Oestermeier
Peter Gerjets
Web-based interactive media authoring system with multimodal interaction
Bok Deuk Song
Yeon Jun Choi
Jong Hyun Park
Textured surfaces for ultrasound haptic displays
Euan Freeman
Ross Anderson
Julie Williamson
Graham Wilson
Stephen A. Brewster
Rapid development of multimodal interactive systems: a demonstration of platform for situated intelligence
Dan Bohus
Sean Andrist
Mihai Jalobeanu
MIRIAM: a multimodal chat-based interface for autonomous systems
Helen Hastie
Francisco Javier Chiyah Garcia
David A. Robb
Pedro Patron
Atanas Laskov
SAM: the school attachment monitor
Dong-Bach Vo
Mohammad Tayarani
Maki Rooksby
Rui Huan
Alessandro Vinciarelli
Helen Minnis
Stephen A. Brewster
The Boston Massacre history experience
David Novick
Laura Rodriguez
Aaron Pacheco
Aaron Rodriguez
Laura Hinojos
Brad Cartwright
Marco Cardiel
Ivan Gris Sepulveda
Olivia Rodriguez-Herrera
Enrique Ponce
Demonstrating TouchScope: a hybrid multitouch oscilloscope interface
Matthew Heinz
Sven Bertel
Florian Echtler
The MULTISIMO multimodal corpus of collaborative interactions
Maria Koutsombogera
Carl Vogel
Using mobile virtual reality to empower people with hidden disabilities to overcome their barriers
Matthieu Poyade
Glyn Morris
Ian Taylor
Victor Portela
SESSION: Demonstrations 2
Bot or not: exploring the fine line between cyber and human identity
Mirjam Wester
Matthew P. Aylett
David A. Braude
Modulating the non-verbal social signals of a humanoid robot
Amol Deshmukh
Bart Craenen
Alessandro Vinciarelli
Mary Ellen Foster
Thermal in-car interaction for navigation
Patrizia Di Campli San Vito
Stephen A. Brewster
Frank Pollick
Stuart White
AQUBE: an interactive music reproduction system for aquariums
Daisuke Sasaki
Musashi Nakajima
Yoshihiro Kanno
Real-time mixed-reality telepresence via 3D reconstruction with HoloLens and commodity depth sensors
Michal Joachimczak
Juan Liu
Hiroshi Ando
Evaluating robot facial expressions
Ruth Aylett
Frank Broz
Ayan Ghosh
Peter McKenna
Gnanathusharan Rajendran
Mary Ellen Foster
Giorgio Roffo
Alessandro Vinciarelli
Bimodal feedback for in-car mid-air gesture interaction
Gözel Shakeri
John H. Williamson
Stephen A. Brewster
A modular, multimodal open-source virtual interviewer dialog agent
Kirby Cofino
Vikram Ramanarayanan
Patrick Lange
David Pautler
David Suendermann-Oeft
Keelan Evanini
Wearable interactive display for the local positioning system (LPS)
Daniel M. Lofaro
Christopher Taylor
Ryan Tse
Donald Sofge
SESSION: Grand Challenge
From individual to group-level emotion recognition: EmotiW 5.0
Abhinav Dhall
Roland Goecke
Shreya Ghosh
Jyoti Joshi
Jesse Hoey
Tom Gedeon
Multi-modal emotion recognition using semi-supervised learning and multiple neural networks in the wild
Dae Ha Kim
Min Kyu Lee
Dong Yoon Choi
Byung Cheol Song
Modeling multimodal cues in a deep learning-based framework for emotion recognition in the wild
Stefano Pini
Olfa Ben Ahmed
Marcella Cornia
Lorenzo Baraldi
Rita Cucchiara
Benoit Huet
Group-level emotion recognition using transfer learning from face identification
Alexandr Rassadin
Alexey Gruzdev
Andrey Savchenko
Group emotion recognition with individual facial emotion CNNs and global image based CNNs
Lianzhi Tan
Kaipeng Zhang
Kai Wang
Xiaoxing Zeng
Xiaojiang Peng
Yu Qiao
Learning supervised scoring ensemble for emotion recognition in the wild
Ping Hu
Dongqi Cai
Shandong Wang
Anbang Yao
Yurong Chen
Group emotion recognition in the wild by combining deep neural networks for facial expression classification and scene-context analysis
Asad Abbas
Stephan K. Chalup
Temporal multimodal fusion for video emotion classification in the wild
Valentin Vielzeuf
Stéphane Pateux
Frédéric Jurie
Audio-visual emotion recognition using deep transfer learning and multiple temporal models
Xi Ouyang
Shigenori Kawaai
Ester Gue Hua Goh
Shengmei Shen
Wan Ding
Huaiping Ming
Dong-Yan Huang
Multi-level feature fusion for group-level emotion recognition
B. Balaji
V. Ramana Murthy Oruganti
A new deep-learning framework for group emotion recognition
Qinglan Wei
Yijia Zhao
Qihua Xu
Liandong Li
Jun He
Lejun Yu
Bo Sun
Emotion recognition in the wild using deep neural networks and Bayesian classifiers
Luca Surace
Massimiliano Patacchiola
Elena Battini Sönmez
William Spataro
Angelo Cangelosi
Emotion recognition with multimodal features and temporal models
Shuai Wang
Wenxuan Wang
Jinming Zhao
Shizhe Chen
Qin Jin
Shilei Zhang
Yong Qin
Group-level emotion recognition using deep models on image scene, faces, and skeletons
Xin Guo
Luisa F. Polanía
Kenneth E. Barner
SESSION: Doctoral Consortium
Towards designing speech technology based assistive interfaces for children's speech therapy
Revathy Nayar
Social robots for motivation and engagement in therapy
Katie Winkle
Immersive virtual eating and conditioned food responses
Nikita Mae B. Tuanquin
Towards edible interfaces: designing interactions with food
Tom Gayler
Towards a computational model for first impressions generation
Beatrice Biancardi
A decentralised multimodal integration of social signals: a bio-inspired approach
Esma Mansouri-Benssassi
Human-centered recognition of children's touchscreen gestures
Alex Shaw
Cross-modality interaction between EEG signals and facial expression
Soheil Rayatdoost
Hybrid models for opinion analysis in speech interactions
Valentin Barriere
Evaluating engagement in digital narratives from facial data
Rui Huan
Social signal extraction from egocentric photo-streams
Maedeh Aghaei
Multimodal language grounding for improved human-robot collaboration: exploring spatial semantic representations in the shared space of attention
Dimosthenis Kontogiorgos
SESSION: Workshop Summaries
ISIAA 2017: 1st international workshop on investigating social interactions with artificial agents (workshop summary)
Thierry Chaminade
Fabrice Lefèvre
Noël Nguyen
Magalie Ochs
WOCCI 2017: 6th international workshop on child computer interaction (workshop summary)
Keelan Evanini
Maryam Najafian
Saeid Safavi
Kay Berkling
MIE 2017: 1st international workshop on multimodal interaction for education (workshop summary)
Gualtiero Volpe
Monica Gori
Nadia Bianchi-Berthouze
Gabriel Baud-Bovy
Paolo Alborno
Erica Volta
Playlab: telling stories with technology (workshop summary)
Julie Williamson
Tom Flint
Chris Speed
MHFI 2017: 2nd international workshop on multisensorial approaches to human-food interaction (workshop summary)
Carlos Velasco
Anton Nijholt
Marianna Obrist
Katsunori Okajima
Rick Schifferstein
Charles Spence