Accepted Papers

For the program, please visit here.

Main Conference: Long Papers

SmellControl: The Study of Sense of Agency in Smell
Patricia Cornelio, Emanuela Maggioni, Giada Brianza, Sriram Subramanian, Marianna Obrist

Mimicker-in-the-Browser: A Novel Interaction Using Mimicry to Augment the Browsing Experience
Riku Arakawa, Hiromu Yakura

Eliciting Emotion with Vibrotactile Stimuli Evocative of Real-World Sensations
Shaun Alexander Macdonald, Stephen Brewster, Frank Pollick

Speaker-Invariant Adversarial Domain Adaptation for Emotion Recognition
Yufeng Yin, Baiyu Huang, Yizhen Wu, Mohammad Soleymani

Exploring Personal Memories and Video Content as Context for Facial Behavior in Predictions of Video-Induced Emotions
Bernd Dudzik, Joost Broekens, Mark Neerincx, Hayley Hung

Gesticulator: A Framework for Semantically-aware Speech-driven Gesture Generation
Taras Kucherenko, Patrik Jonell, Sanne van Waveren, Gustav Eje Henter, Simon Alexandersson, Iolanda Leite, Hedvig Kjellström

Studying Person-Specific Pointing and Gaze Behavior for Multimodal Referencing of Outside Objects from a Moving Vehicle
Amr Gomaa, Guillermo Reyes, Alexandra Alles, Lydia Rupp, Michael Feld

"Was that successful?" On Integrating Proactive Meta-Dialogue in a DIY-Assistant using Multimodal Cues
Matthias Kraus, Marvin Schiller, Gregor Behnke, Pascal Bercher, Michael Dorna, Michael Dambier, Birte Glimm, Susanne Biundo, Wolfgang Minker

Facilitating Flexible Force Feedback Design with Feelix
Anke van Oosterhout, Miguel Bruns, Eve Hoggan

FeetBack: Augmenting Robotic Telepresence with Haptic Feedback on the Feet
Brennan Jones, Jens Maiero, Alireza Mogharrab, Ivan Abdo Aguliar, Ashu Adhikari, Bernhard Riecke, Ernst Kruijff, Carman Neustaedter, Robert W. Lindeman

MORSE: MultimOdal sentiment analysis for Real-life SEttings
Yiqun Yao, Veronica Perez-Rosas, Mohamed Abouelenien, Mihai Burzo

FilterJoint: Toward an Understanding of Whole-Body Gesture Articulation
Aishat Aloba, Julia Woodward, Lisa Anthony

Combining Auditory and Mid-Air Haptic Feedback for a Light Switch Button
Cisem Ozkul, David Geerts, Isa Rutten

Purring Wheel: Thermal and Vibrotactile Notifications on the Steering Wheel
Patrizia Di Campli San Vito, Stephen Brewster, Frank Pollick, Simon Thompson, Lee Skrypchuk, Alexandros Mouzakitis

LASO: Exploiting Locomotive and Acoustic Signatures over the Edge to Annotate IMU Data for Human Activity Recognition
Soumyajit Chatterjee, Avijoy Chakma, Aryya Gangopadhyay, Nirmalya Roy, Bivas Mitra, Sandip Chakraborty

Force9: Force-assisted Miniature Keyboard on Smartwearables
Lik Hang Lee, Ngo Yan Yeung, Tristan Braud, Tong Li, Xiang Su, Pan Hui

A Neural Architecture for Detecting User Confusion in Eye-tracking Data
Shane Sims, Cristina Conati

A Multi-modal system to assess cognition in children from their physical movements
Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Ashish Jaiswal, Alexis Lueckenhoff, Maria Kyrarini, Fillia Makedon

LDNN: Linguistic Knowledge Injectable Deep Neural Network for Group Cohesiveness Understanding
Yanan Wang, Jianming Wu, Jinfa Huang, Gen Hattori, Yasuhiro Takishima, Shinya Wada, Rui Kimura, Jie Chen, Satoshi Kurihara

Hand-eye Coordination for Textual Difficulty Detection in Text Summarization
Jun Wang, Grace Ngai, Hong Va Leong

Preserving Privacy in Image-based Emotion Recognition through User Anonymization
Vansh Narula, Kexin Feng, Theodora Chaspari

Effects of Visual Locomotion and Tactile Stimuli Duration on the Emotional Dimensions of the Cutaneous Rabbit Illusion
Mounia Ziat, Katherine Chin, Roope Raisamo

You Have a Point There: Object Selection Inside an Automobile Using Gaze, Head Pose and Finger Pointing
Abdul Rafey Aftab, Michael von der Beeck, Michael Feld

Predicting Video Affect via Induced Affection in the Wild
Yi Ding, Radha Kumaran, Tianjiao Yang, Tobias Höllerer

Job Interviewer Android with Elaborate Follow-up Question Generation
Koji Inoue, Kohei Hara, Divesh Lala, Kenta Yamamoto, Shizuka Nakamura, Katsuya Takanashi, Tatsuya Kawahara

Modality Dropout for Improved Performance-driven Talking Faces
Ahmed Hussen Abdelaziz, Barry-John Theobald, Paul Dixon, Reinhard Knothe, Nick Apostoloff, Sachin Kajareker

Dyadic Speech-based Affect Recognition using DAMI-P2C Parent-child Multimodal Interaction Dataset
Huili Chen, Yue Zhang, Felix Weninger, Rosalind Picard, Cynthia Breazeal, Hae Won Park

The WoNoWa Dataset: Investigating the Transactive Memory System in Small Group Interactions
Beatrice Biancardi, Lou Maisonnave-Couterou, Pierrick Renault, Brian Ravenet, Maurizio Mancini, Giovanna Varni

Is She Truly Enjoying the Conversation?: Analysis of Physiological Signals toward Adaptive Dialogue Systems
Shun Katada, Shogo Okada, Yuki Hirano, Kazunori Komatani

Facial Electromyography-based Adaptive Virtual Reality Gaming for Cognitive Training
Lorcan Reidy, Dennis Chan, Charles Nduka, Hatice Gunes

How Good is Good Enough? The Impact of Errors in Single Person Action Classification on the Modeling of Group Interactions in Volleyball
Lian Beenhakker, Fahim Salim, Dees Postma, Robby van Delden, Dennis Reidsma, Bert-Jan van Beijnum

PiHearts: Resonating Experiences of Self and Others Enabled by a Tangible Somaesthetic Design
Ilhan Aslan, Andreas Seiderer, Chi Tai Dang, Simon Raedler, Elisabeth André

Multimodal Data Fusion based on the Global Workspace Theory
Cong Bao, Zafeirios Fountas, Temitayo Olugbade, Nadia Berthouze

Finally on Par?! Multimodal and Unimodal Interaction for Open Creative Design Tasks in Virtual Reality
Chris Zimmerer, Erik Wolf, Sara Wolf, Martin Fischbach, Jean-Luc Lugrin, Marc Erich Latoschik

BreathEasy: Assessing Respiratory Diseases Using Mobile Multimodal Sensors
Md Mahbubur Rahman, Mohsin Yusuf Ahmed, Tousif Ahmed, Bashima Islam, Viswam Nathan, Korosh Vatanparvar, Ebrahim Nemati, Daniel McCaffrey, Jilong Kuang, Jun Alex Gao

Multimodal Automatic Coding of Client Behavior in Motivational Interviewing
Leili Tavabi, Kalin Stefanov, Larry Zhang, Brian Borsari, Joshua D. Woolley, Stefan Scherer, Mohammad Soleymani

Towards Engagement Recognition of People with Dementia in Care Settings
Lars Steinert, Felix Putze, Dennis Küster, Tanja Schultz

The eyes know it: FakeET - An Eye-tracking Database to Understand Deepfake Perception
Parul Gupta, Komal Chugh, Abhinav Dhall, Ramanathan Subramanian

Did the Children Behave? Investigating the Relationship Between Attachment Condition and Child Computer Interaction
Dong-Bach Vo, Stephen Brewster, Alessandro Vinciarelli

Depression Severity Assessment for Adolescents at High Risk of Mental Disorders
Michal Muszynski, Jamie Zelazny, Jeffrey M. Girard, Louis-Philippe Morency

Fifty Shades of Green: Towards A Robust Measure of Intra-annotator Agreement for Continuous Signals
Brandon Booth, Shrikanth Narayanan

Influence of Electric Taste, Smell, Color, and Thermal Sensory Modalities on the Liking and Mediated Emotions of Virtual Flavor Perception
Nimesha Ranasinghe, Meetha Nesam James, Michael Gecawicz, Jonathan Roman Bland, David Smith

Gesture Enhanced Comprehension of Ambiguous Human-to-Robot Instructions
Dulanga Kaveesha Weerakoon Mudiyanselage, Vigneshwaran Subbaraju, Nipuni Hansika Karumpulli Arachchige, Tuan Tran, Qianli Xu, U-Xuan Tan, Joo Hwee Lim, Archan Misra

Introducing Representations of Facial Affect in Automated Multimodal Deception Detection
Leena Mathur, Maja J Mataric

Attention Sensing through Multimodal User Modeling in an Augmented Reality Guessing Game
Felix Putze, Dennis Küster, Timo Urban, Alexander Zastrow, Marvin Kampen

Understanding Applicants' Reactions to Asynchronous Video Interviews through Self-reports and Nonverbal Cues
Skanda Muralidhar, Emmanuelle P. Kleinlogel, Eric Mayor, Marianne Schmid Mast, Adrian Bangerter, Daniel Gatica-Perez

Incorporating Measures of Intermodal Coordination in Automated Analysis of Infant-Mother Interaction
Lauren Klein, Victor Ardulov, Kate Hu, Mohammad Soleymani, Alma Gharib, Barbara Thompson, Pat Levitt, Maja Mataric

Using Emotions to Complement Multi-Modal Human-Robot Interaction in Urban Search and Rescue Scenarios
Sami Alperen Akgun, Moojan Ghafurian, Mark Crowley, Kerstin Dautenhahn

MSP-Face Corpus: A Natural Audiovisual Emotional Database
Andrea Vidal, Ali N. Salman, Wei-Cheng Lin, Carlos Busso

Bring the Environment to Life: A Sonification Module for People with Visual Impairments to Improve Situation Awareness
Angela Constantinescu, Monica Haurilet, Karin Müller, Vanessa Petrausch, Rainer Stiefelhagen

Detecting Depression in Less Than 10 Seconds: Impact of Speaking Time on Depression Detection Sensitivity
Nujud Aloshban, Anna Esposito, Alessandro Vinciarelli

Analysis of Face-Touching Behavior in Large Scale Social Interaction Dataset
Cigdem Beyan, Matteo Bustreo, Muhammad Shahid, Gian Luca Bailo, Nicolo Carissimi, Alessio Del Bue

Multimodal, Multiparty Modeling of Collaborative Problem Solving Performance
Shree Krishna Subburaj, Angela E.B. Stewart, Arjun Ramesh Rao, Sidney D'Mello

Estimating the Intensity of Facial Expressions Accompanying Feedback Responses in Multiparty Video-Mediated Communication
Ryosuke Ueno, Yukiko I. Nakano, Jie Zeng, Fumio Nihei

StrategicReading: Understanding Complex Mobile Reading Strategies via Implicit Behavior Sensing
Wei Guo, Byeong-Young Cho, Jingtao Wang

Effect of modality on human and machine scoring of presentation videos
Haley Lepp, Chee Wee Leong, Katrina Roohr, Michelle P. Martin-Raugh, Vikram Ramanarayanan

MMGatorAuth: A Novel Multimodal Dataset for Authentication Interactions in Gesture and Voice
Sarah Morrison-Smith, Aishat Aloba, Hangwei Lu, Brett Benda, Shaghayegh Esmaeili, Gianne Flores, Jesse Smith, Nikita Soni, Isaac Wang, Rejin Joy, Damon L. Woodard, Jaime Ruiz, Lisa Anthony

Going with our Guts: Potentials of Wearable Electrogastrography (EGG) for Affect Detection
Angela Vujic, Stephanie Tong, Rosalind Picard, Pattie Maes

Eye-Tracking to Predict User Cognitive Abilities and Performance for User-Adaptive Narrative Visualizations
Oswald Barral, Sebastien Lalle, Grigorii Guz, Alireza Iranpour, Cristina Conati

Toward Adaptive Trust Calibration for Level 2 Driving Automation
Kumar Akash, Neera Jain, Teruhisa Misu

Temporal Attention and Consistency Measuring for Video Question Answering
Lingyu Zhang, Richard J. Radke

Toward Multimodal Modeling of Emotional Expressiveness
Victoria Lin, Jeffrey M. Girard, Michael Sayette, Louis-Philippe Morency

Mitigating Biases in Multimodal Personality Assessment
Shen Yan, Di Huang, Mohammad Soleymani

Early Prediction of Visitor Engagement in Science Museums with Multimodal Learning Analytics
Andrew Emerson, Nathan Henderson, Jonathan Rowe, Wookhee Min, Seung Lee, James Minogue, James Lester

Enhancing Affect Detection in Game-Based Learning Environments with Multimodal Conditional Generative Modeling
Nathan Henderson, Wookhee Min, Jonathan Rowe, James Lester

Main Conference: Short Papers

Gaze Tracker Accuracy and Precision Measurements in Virtual Reality Headsets
Jari Kangas, Olli Koskinen, Roope Raisamo

Conventional and Non-conventional Job Interviewing Methods: A Comparative Study in Two Countries
Kumar Shubham, Emmanuelle Kleinlogel, Anaïs Butera, Marianne Schmid Mast, Dinesh Babu Jayagopi

OpenSense: A Platform for Multimodal Data Acquisition and Behavior Perception
Kalin Stefanov, Baiyu Huang, Zongjian Li, Mohammad Soleymani

Touch Recognition with Attentive End-to-End Model
Wail El Bani, Mohamed Chetouani

A Comparison between Laboratory and Wearable Sensors in the Context of Physiological Synchrony
Jasper J. van Beers, Ivo V. Stuldreher, Nattapong Thammasan, Anne-Marie Brouwer

Examining the Link between Children's Cognitive Development and Touchscreen Interaction Patterns
Ziyang Chen, Yu-Peng Chen, Alex Shaw, Aishat Aloba, Pasha Antonenko, Lisa Anthony, Jaime Ruiz

The iCub Multisensor Datasets for Robot and Computer Vision Applications
Murat Kirtay, Ugo Albanese, Lorenzo Vannucci, Guido Schillaci, Cecilia Laschi, Egidio Falotico

Personalized Modeling of Real-World Vocalizations from Nonverbal Individuals
Jaya Narain, Kristina T. Johnson, Craig Ferguson, Amanda O'Brien, Tanya Talkar, Yue Zhang, Peter Wofford, Thomas Quatieri, Pattie Maes, Rosalind Picard

Automated Time Synchronization of Cough Events from Multimodal Sensors in Mobile Devices
Tousif Ahmed, Mohsin Yusuf Ahmed, Md Mahbubur Rahman, Ebrahim Nemati, Bashima Islam, Korosh Vatanparvar, Viswam Nathan, Daniel McCaffrey, Jilong Kuang, Jun Alex Gao

ROSMI: A Multimodal Corpus for Map-based Instruction-Giving
Miltiadis Marios Katsakioris, Ioannis Konstas, Pierre Yves Mignotte, Helen Hastie

The Sensory Interactive Table: Exploring the Social Space of Eating
Roelof de Vries, Juliet Haarman, Emiel Harmsen, Dirk Heylen, Hermie Hermens

Multimodal Gated Information Fusion for Emotion Recognition from EEG Signals and Facial Behaviors
Soheil Rayatdoost, David Rudrauf, Mohammad Soleymani

Analyzing Nonverbal Behaviors along with Praising
Toshiki Onishi, Arisa Yamauchi, Ryo Ishii, Yushi Aono, Akihiro Miyata

Detection of Listener Uncertainty in Robot-Led Second Language Conversation Practice
Ronald Cumbal, José Lopes, Olov Engwall

Predicting the Effectiveness of Systematic Desensitization Through Virtual Reality for Mitigating Public Speaking Anxiety
Margaret Cordelia von Ebers, Ehsanul Haque Nirjhar, Amir Behzadan, Theodora Chaspari

Multimodal Assessment of Oral Presentations using HMMs
Everlyne Kimani, Prasanth Murali, Ameneh Shamekhi, Dhaval Parmar, Sumanth Bharadwaj Munikoti, Timothy Bickmore

Punchline Detection using Context-Aware Hierarchical Multimodal Fusion
Akshat Choube, Mohammad Soleymani

Leniency to those who confess? Predicting the Legal Judgement via Multi-Modal Analysis
Liang Yang, Jingjie Zeng, Tao Peng, Xi Luo, Hongfei Lin, Jinhui Zhang

Grand Challenges

Advanced Multi-Instance Learning Method with Multi-features Engineering and Conservative Optimization for Engagement Intensity Prediction
Jianming Wu, Bo Yang, Yanan Wang, Gen Hattori

Implicit Knowledge Injectable Cross Attention Audiovisual Model for Group Emotion Recognition
Yanan Wang, Jianming Wu, Panikos Heracleous, Shinya Wada, Rui Kimura, Satoshi Kurihara

A Multi-Modal Approach for Driver Gaze Prediction to Remove Identity Bias
Ze Hui Yu, Xiehe Huang, Zhang Xiubao, Haifeng Shen, Qun, Weihong Deng, Jian Tang, Yi Yang, Jieping Ye

Group-level Speech Emotion Recognition Utilising Deep Spectrum Features
Sandra Ottl, Shahin Amiriparian, Maurice Gerczuk, Vincent Karas, Bjoern Schuller

Multi-rate Attention Based GRU Model for Engagement Prediction
Bin Zhu, Xinjie Lan, Xin Guo, Kenneth Barner, Charles Boncelet

Fusical: Multimodal Fusion for Video Sentiment
Boyang Tom Jin, Leila Abdelrahman, Cong Kevin Chen, Amil Khanzada

X-AWARE: ConteXt-AWARE Human-Environment Attention Fusion for Driver Gaze Prediction in the Wild
Lukas Stappen, Georgios Rizos, Bjorn W. Schuller

Group Level Audio-Video Emotion Recognition Using Hybrid Networks
Chuanhe Liu, Minghao Wang, Wenqiang Jiang, Tianhao Tang

Group-Level Emotion Recognition using a unimodal privacy-safe non-individual approach
Anastasia Petrova, Dominique Vaufreydaz, Philippe Dessus

Recognizing Emotion in the Wild using Multimodal Data
Shivam Srivastava, Saandeep Lakshminarayan, Saurabh Hinduja, Sk Rahatul Jannat, Hamza Elhamdadi, Shaun Canavan

Multi-modal Fusion Using Spatio-temporal and Static Features for Group Emotion Recognition
Mo Sun

Extract the Gaze Multi-dimensional Information Analysis Driver Behavior
Kui Lyu, Minghao Wang, Liyu Meng

EmotiW 2020: Driver Gaze, Group Emotion, Student Engagement and Physiological Signal based Challenges
Abhinav Dhall, Garima Sharma, Roland Goecke, Tom Gedeon

Doctoral Consortium

How to Complement Learning Analytics with Smartwatches?: Fusing Physical Activities, Environmental Context and Learning Activities
George-Petru Ciordas-Hertel

Multimodal Physiological Synchrony as Measure of Attentional Engagement
Ivo V. Stuldreher

Multimodal Groups' Analysis for Automated Cohesion Estimation
Lucien Maman

Towards Real-Time Multimodal Emotion Recognition among Couples
George Boateng

Towards Multimodal Human-Like Characteristics and Expressive Visual Prosody in Virtual Agents
Mireille Fares

Towards A Multimodal and Context-Aware Framework for Human Navigational Intent Inference
Zhitian Zhang

Personalised Human Device Interaction through Context aware Augmented Reality
Madhawa Perera

Automating Facilitation and Documentation of Collaborative Ideation Processes
Matthias Merk

Supporting instructors to provide emotional and instructional scaffolding for English language learners through biosensor-based feedback
Heera Lee

Detection of Micro-expression Recognition Based on Spatio-Temporal Modelling and Spatial Attention
Mengjiong Bai

Zero-Shot Learning for Gesture Recognition
Naveen Madapana, Juan Wachs

Robot Assisted Diagnosis of Autism in Children
B. Ashwini, Jainendra Shukla

Demonstrations and Exhibits

Alfie: An Interactive Robot with Moral Compass
Cigdem Turan, Patrick Schramowski, Constantin Rothkopf, Kristian Kersting

Spark Creativity by Speaking Enthusiastically - Communication Training using an E-Coach
Carla Viegas, Albert Lu, Annabel Su, Carter Strear, Yi Xu, Albert Topdjian, Daniel Limon, JJ Xu

FairCVtest Demo: Understanding Bias in Multimodal Learning with a Testbed in Fair Automatic Recruitment
Alejandro Peña, Ignacio Serna, Aythami Morales, Julian Fierrez

LieCatcher: Game Framework for Collecting Human Judgments of Deceptive Speech
Sarah Ita Levitan, Xinyue Tan, Julia Hirschberg

The AI-Medic: A Multimodal Artificial Intelligent Mentor for Trauma Surgery
Edgar Rojas-Muñoz, Kyle Couperus, Juan Wachs
