ICMI 2017 Conference Programme

The full conference schedule will be published soon. The papers accepted for oral and poster presentation are listed below.


List of Accepted Papers

  • Automatic Assessment of Communication Skill in Non-Conventional Interview Settings: A Comparative Study
    Pooja Rao S B, Sowmya Rasipuram, Rahul Das, Dinesh Babu Jayagopi

  • Low-Intrusive Recognition of Expressive Movement Qualities
    Radoslaw Niewiadomski, Maurizio Mancini, Stefano Piana, Paolo Alborno, Gualtiero Volpe, Antonio Camurri

  • Digitising a Medical Clerking System with Multimodal Interaction Support
    Harrison South, Martin Taylor, Huseyin Dogan, Nan Jiang

  • GazeTap: Towards Hands-Free Interaction in the Operating Room
    Benjamin Hatscher, Maria Luz, Lennart Nacke, Veit Müller, Norbert Elkmann, Christian Hansen

  • Pooling Acoustic and Lexical Features for the Prediction of Valence
    Zakaria Aldeneh, Soheil Khorram, Dimitrios Dimitriadis, Emily Mower Provost

  • Comparing Collision Techniques for Virtual Objects
    Byungjoo Lee, Qiao Deng, Eve Hoggan, Antti Oulasvirta

  • Tablets, Tabletops, and Smartphones: Cross-Platform Comparisons of Children's Touchscreen Interactions
    Julia Woodward, Alex Shaw, Aishat Aloba, Ayushi Jain, Jaime Ruiz, Lisa Anthony

  • Toward an Efficient Body Expression Recognition based on the Synthesis of a Neutral Movement
    Arthur Crenn, Alexandre Meyer, Rizwan Khan, Saida Bouakaz, Hubert Konik

  • Trust Triggers for Multimodal Command and Control Interfaces
    Helen Hastie, Xingkun Liu, Pedro Patron

  • TouchScope: a Hybrid Multitouch Oscilloscope Interface
    Matthew Heinz, Sven Bertel, Florian Echtler

  • A Multimodal System to Characterise Melancholia: Cascaded Bag of Words Approach
    Shalini Bhatia, Munawar Hayat, Roland Goecke

  • Crowdsourcing Ratings of Caller Engagement in Thin-Slice Videos of Human-Machine Dialog: Benefits and Pitfalls
    Vikram Ramanarayanan, Chee Wee Leong, David Suendermann-Oeft

  • Modelling Fusion of Modalities in Multimodal Interactive Systems with MMMM
    Bruno Dumas, Jonathan Pirau, Denis Lalanne

  • Temporal Alignment using the Incremental Unit Framework
    Casey Kennington, Ting Han, David Schlangen

  • Multimodal Gender Detection
    Mohamed Abouelenien, Veronica Perez-Rosas, Rada Mihalcea, Mihai Burzo

  • How May I Help You? Behavior and Impressions in Hospitality Service Encounters
    Skanda Muralidhar, Marianne Schmid Mast, Daniel Gatica-Perez

  • Tracking Liking State in Brain Activity while Watching Multiple Movies
    Naoto Terasawa, Hiroki Tanaka, Sakriani Sakti, Satoshi Nakamura

  • Does Serial Memory of Locations Benefit from Spatially Congruent Audiovisual Stimuli?
    Benjamin Stahl, Georgios Marentakis

  • Zero Shot Gestural Learning (ZSGL)
    Naveen Madapana, Juan Wachs

  • Virtual Debate Coach Design: Assessing Multimodal Argumentation Performance
    Volha Petukhova, Tobias Mayer, Andrei Malchanau, Harry Bunt

  • Freehand Grasping in Mixed Reality: Analysing Variation During Transition Phase of Interaction
    Maadh Al-Kalbani, Maite Frutos-Pascual, Ian Williams

  • Hand-to-Hand: An Intermanual Illusion of Movement
    Dario Pittera, Marianna Obrist, Ali Israr

  • Markov Reward Models for Analyzing Group Interaction
    Gabriel Murray

  • Analyzing First Impressions of Warmth and Competence from Observable Nonverbal Cues in Expert-Novice Interactions
    Beatrice Biancardi, Angelo Cafaro, Catherine Pelachaud

  • The NoXi Database: Multimodal Recordings of Mediated Novice-Expert Interactions
    Angelo Cafaro, Johannes Wagner, Tobias Baur, Soumia Dermouche, Mercedes Torres Torres, Catherine Pelachaud, Elisabeth Andre, Michel Valstar

  • Head-Mounted Displays as Opera Glasses: Using Mixed-Reality to Deliver an Egalitarian User Experience During Live Events
    Carl Bishop, Augusto Esteves, Iain McGregor

  • An investigation of Near-Far crossmodal metaphor instantiation in TUIs
    Feng Feng, Tony Stockman

  • Evaluation of Psychoacoustically Modelled Acoustic Parameters for Sonification
    Jamie Ferguson, Stephen Brewster

  • Automatic Classification of Auto-Correction Errors in Predictive Text Entry Based on EEG and Context Information
    Felix Putze, Maik Schünemann, Tanja Schultz, Wolfgang Stuerzlinger

  • Head and Shoulders: Automatic Error Detection in Human-Robot Interaction
    Pauline Trung, Manuel Giuliani, Michael Miksch, Susanne Stadler, Nicole Mirnig, Manfred Tscheligi

  • Analyzing Gaze Behavior during Turn-taking for Estimating Empathy Skill Level
    Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka

  • "Stop over There" Natural Gesture and Speech Interaction for Non-Critical Spontaneous Intervention in Autonomous Driving
    Robert Tscharn, Marc Erich Latoschik, Diana Löffler, Jörn Hurtienne

  • Text Based User Comments as a Signal for Automatic Language Identification of Online Videos
    Ayse Seza Dogruoz, Natalia Ponomareva, Sertan Girgin, Reshu Jain, Christoph Oehler

  • Cumulative Attributes for Pain Intensity Estimation
    Joy Onyekachukwu Egede, Michel Valstar

  • Gender and Emotion Recognition with Implicit User Signals
    Maneesh Bilalpur, Seyed Mostafa Kia, Manisha Chawla, Tat-Seng Chua, Ramanathan Subramanian

  • Animating the Adelino robot with ERIK: the Expressive Robotics Inverse Kinematics
    Tiago Ribeiro, Ana Paiva

  • Predicting the Distribution of Emotion Perception: Capturing Inter-Rater Variability
    Biqiao Zhang, Emily Mower Provost, Georg Essl

  • Social Interaction Conventions as Prior for Gaze Model Adaptation
    Rémy Siegfried, Yu Yu, Jean-Marc Odobez

  • Automatic Detection of Pain from Spontaneous Facial Expressions
    Fatma Meawad, Su-Yin Yang, Fong Ling Loy

  • The Reliability of Non-verbal Cues for Situated Reference Resolution and their Interplay with Language - Implications for Human Robot Interaction
    Stephanie Gross, Brigitte Krenn, Matthias Scheutz

  • Evaluating Content-centric vs User-centric Ad Affect Recognition
    Abhinav Shukla, Shruti Gullapuram, Harish Katti, Karthik Yadati, Mohan Kankanhalli, Ramanathan Subramanian

  • Interactive Narration with a Child: Impact of Prosody and Facial Expressions
    Ovidiu Șerban, Mukesh Barange, Sahba Zojaji, Alexandre Pauchet, Melodie Ruinet, Adeline Richard, Emilie Chanoni

  • Utilising Natural Cross-Modal Mappings for Visual Control of Feature Based Sound Synthesis
    Augoustinos Tsiros, Grégory Leplâtre

  • Automatically Predicting Human Knowledgeability Through Non-Verbal Cues
    Abdelwahab Bourai, Tadas Baltrusaitis, Louis-Philippe Morency

  • A Domain Adaptation Approach to Improve Speaker Turn Embedding Using Face Representation
    Nam Le, Jean-Marc Odobez

  • Multimodal Sentiment Analysis with Word-level Fusion and Reinforcement Learning
    Minghai Chen, Sen Wang, Paul Pu Liang, Tadas Baltrusaitis, Amir Zadeh, Louis-Philippe Morency

  • Computer vision based fall detection by a convolutional neural network
    Miao Yu, Liyun Gong, Stefanos Kollias

  • Predicting Meeting Extracts in Group Discussions using Multimodal Convolutional Neural Networks
    Fumio Nihei, Yukiko I. Nakano, Yutaka Takase

  • The Relationship between Task-Induced Stress, Vocal Changes and Physiological State during a Dyadic Team Task
    Catherine Neubauer, Mathieu Chollet, Sharon Mozgai, Peter Khooshabeh, Stefan Scherer

  • Meyendtris: A Hands-Free, Multimodal Tetris Clone using Eye Tracking and Passive BCI for Intuitive Neuroadaptive Gaming
    Laurens R. Krol, Sarah-Christin Freytag, Thorsten O. Zander

  • AMHUSE: A Multimodal dataset for HUmour SEnsing
    Giuseppe Boccignone, Donatello Conte, Vittorio Cuculo, Raffaella Lanzarotti

  • Do you speak to a human or a virtual agent? Automatic analysis of user's social cues during mediated communication
    Magalie Ochs, Nathan Libermann, Axel Boidin, Thierry Chaminade

  • Pre-Touch Proxemics: Moving the Design Space of Touch Targets from Still Graphics towards Proxemic Behaviors
    Ilhan Aslan, Elisabeth Andre

  • GazeTouchPIN: Protecting Sensitive Data on Mobile Devices using Secure Multimodal Authentication
    Mohamed Khamis, Mariam Hassib, Emanuel von Zezschwitz, Andreas Bulling, Florian Alt

  • Comparing Human and Machine Recognition of Children's Touchscreen Stroke Gestures
    Alex Shaw, Jaime Ruiz, Lisa Anthony

  • Estimating Verbal Expressions of Task and Social Cohesion in Meetings by Quantifying Prosodic Mimicry
    Marjolein Nanninga, Yanxia Zhang, Nale Lehmann-Willenbrock, Zoltan Szlávik, Hayley Hung

  • Multi-Task Learning of Social Psychology Assessments and Nonverbal Features for Automatic Leadership Identification
    Cigdem Beyan, Francesca Capozzi, Cristina Becchio, Vittorio Murino

  • Multimodal Analysis of Vocal Collaborative Search: A Public Corpus and Results
    Daniel McDuff, Mary Czerwinski, Paul Thomas, Nick Craswell

  • UE-HRI: A New Dataset for the Study of User Engagement in Spontaneous Human-Robot Interactions
    Atef Ben Youssef, Miriam Bilac, Slim Essid, Chloé Clavel, Angelica Lim, Marine Chamoux

  • Rhythmic Micro-Gestures: Discreet Interaction On-the-Go
    Euan Freeman, Gareth Griffiths, Stephen Brewster

  • Mining a Multimodal Corpus of Doctor's Training for Virtual Patient's Feedbacks
    Cris Porhet, Magalie Ochs, Jorane Saubesty, Grégoire de Montcheuil, Roxane Bertrand

  • Data Augmentation of Wearable Sensor Data for Parkinson's Disease Monitoring using Convolutional Neural Networks
    Terry Taewoong Um, Franz Michael Josef Pfister, Daniel Pichler, Satoshi Endo, Muriel Lang, Sandra Hirche, Urban Fietzek, Dana Kulić

  • IntelliPrompter: Speech-based Dynamic Note Display Interface for Oral Presentations
    Reza Asadi, Ha Trinh, Harriet Fell, Timothy Bickmore

  • Multimodal Sentiment Analysis using Hierarchical Fusion
    Navonil Majumder, Soujanya Poria, Devamanyu Hazarika, Amir Zadeh, Louis-Philippe Morency, Alexander Gelbukh


ICMI 2017 ACM International Conference on Multimodal Interaction. Copyright © 2017