ICMI 2021 Conference Program

Tentative Program ACM ICMI 2021 (subject to change)

Main Program

The detailed program is provided below.

Tutorial on Ethics in AI and human-machine interaction

1- General introduction to ethical issues in AI and Human-machine interaction 09:00-09:30

2- Ethics in Extended Reality 09:30-10:00

3- Philosophical perspective and issues in social robots 10:00-10:30

4- Ethics in Natural Language Processing 10:45-11:15

5- Ethics in Human robot interaction and vulnerable persons 11:15-11:45


Times are in the Montreal time zone (EDT). 8am EDT is 8pm in Beijing (CST) and 2pm in Amsterdam (CEST).

Day 1: Oct 19, Day 2: Oct 20, Day 3: Oct 21

7:45-8:00
Day 1: Opening Session

8:00-9:00
Day 1: Keynote (Russ Salakhutdinov); presenter: remote, audience: hybrid
Day 2: Keynote (Karon MacLean); presenter: physical, audience: hybrid
Day 3: Keynote (Susanne P. Lajoie); presenter: physical, audience: hybrid

9:00-10:15
Days 1-3: Oral Session; presenters: hybrid, audience: hybrid

10:15-10:30
Days 1-3: Break

10:30-12:00
Days 1-3: Oral Session; presenters: hybrid, audience: hybrid

12:00-12:15
Days 1-3: Break

12:15-13:15
Day 1: ICMI Sustained Accomplishment Award Keynote (Elisabeth André); presenter: physical, audience: hybrid
Day 2: Blue Sky Special Session, Awards and Moderated Audience Discussion; presenters: physical or virtual, audience: hybrid
Day 3: ICMI Open Public Forum, followed by the Closing Session (13:05-13:15)

13:15-15:00
Days 1-3: Lunch/Discussion; all physical

15:00-17:00
Day 1: Posters; physical and virtual
Day 2: Posters/Demos; physical and virtual
Day 3: Posters; physical and virtual

17:00-18:00
Day 1: Happy Hour Reception; physical and virtual
Day 2: Demos; virtual only

18:00-20:00
Day 1: Dinner; all physical, on your own in the beautiful Old Port or at the Hotel
Day 2: Award Banquet; physical and virtual, audience: hybrid
Day 3: Dinner; all physical, on your own in the beautiful Old Port or at the Hotel

20:00-22:00
Days 1-3: Repeat Posters; virtual only

Notes:

  • Keynotes may be in person or virtual.
  • Each Oral Session will have both virtual and in-person presentations.
  • Papers/posters presented in-person will also be given a slot for virtual presentation.

Workshops

Workshops are held in person on the day before the main program (Oct 18) and on the day after it (Oct 22). More details will follow based on the organizers' preferences.

Workshops on Oct 18:

1- The Second International Workshop on Automated Assessment of Pain (AAP) Half day


3- Insights on Group & Team Dynamics Full day


4- CATS2021: International Workshop on Corpora and Tools for Social Skills Annotation Full day


7- The 6th International Workshop on Affective Social Multimedia Computing (ASMMC 2021) Full day


8- Workshop on Multimodal Affect and Aesthetic Experience Full day


11- Socially-Informed AI for Healthcare – Understanding and Generating Multimodal Nonverbal Cues Full day


Workshops on Oct 22:

2- 2nd Workshop on Social Affective Multimodal Interaction for Health (SAMIH) Half day


5- Workshop on modelling socio-emotional and cognitive processes from multimodal data in the wild Half day


6- 2nd ICMI Workshop on Bridging Social Sciences and AI for Understanding Child Behaviour Half day


9- Empowering Interactive Robots by Learning Through Multimodal Feedback Channels Half day


10- GENEA Workshop 2021: Generation and Evaluation of Non-verbal Behaviour for Embodied Agents Half day

Detailed Program

Detailed Program: Main Days (Oct 19-21)

Times are in the Montreal time zone (EDT). 8am EDT is 8pm in Beijing (CST) and 2pm in Amsterdam (CEST).

Day 1: Oct 19 Session/Paper Name Chair/Authors
7:45-8:00 Opening Session
8:00-9:00 Keynote: Russ Salakhutdinov Carlos Busso and Catherine Pelachaud
From Differentiable Reasoning to Self-supervised Embodied Active Learning Russ Salakhutdinov
9:00-10:15 Oral session "New Analytic and Machine Learning Techniques" Carlos Busso and Catherine Pelachaud
9:00-9:15 A Contrastive Learning Approach for Compositional Zero-Shot Learning Muhammad Umer Anwaar, Rayyan Ahmad Khan, Zhihui Pan, Martin Kleinsteuber
9:15-9:30 🏆 Best Paper Nominee: Exploiting the Interplay between Social and Task Dimensions of Cohesion to Predict its Dynamics Leveraging Social Sciences Lucien Maman, Laurence Likforman-Sulem, Mohamed Chetouani, Giovanna Varni
9:30-9:45 Dynamic Mode Decomposition with Control as a Model of Multimodal Behavioral Coordination Lauren Klein, Victor Ardulov, Alma Gharib, Barbara Thompson, Pat Levitt, Maja Mataric
9:45-10:00 🏆 Best Paper Nominee: Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis Wei Han, Hui Chen, Alexander Gelbukh, Amir Zadeh, Louis-philippe Morency, Soujanya Poria
10:00-10:15 Efficient Deep Feature Calibration for Cross-Modal Joint Embedding Learning Zhongwei Xie, Ling Liu, Lin Li, Luo Zhong
10:15-10:30 Break
10:30-12:00 Oral session "Support for Health, Mental Health and Disability" Mohamed Abouelenien and Mathieu Chollet
10:30-10:45 🏆 Best Paper Nominee: A Multimodal Dataset and Evaluation for Feature Estimators of Temporal Phases of Anxiety Hashini Senaratne, Levin Kuhlmann, Kirsten Ellis, Glenn Melvin, Sharon Oviatt
10:45-11:00 Inclusive Action Game Presenting Real-time Multimodal Presentations for Sighted and Blind Persons Masaki Matsuo, Takahiro Miura, Ken-ichiro Yabu, Atsushi Katagiri, Masatsugu Sakajiri, Junji Onishi, Takeshi Kurata, Tohru Ifukube
11:00-11:15 🏆 Best Paper Nominee: ViCA: Combining Visual, Social, and Task-oriented Conversational AI in a Healthcare Setting George Pantazopoulos, Jeremy Bruyere, Malvina Nikandrou, Thibaud Boissier, Supun Hemanthage, Binha Kumar Sachish, Vidyul Shah, Christian Dondrup, Oliver Lemon
11:15-11:30 Towards Sound Accessibility in Virtual Reality Dhruv Jain, Sasa Junuzovic, Eyal Ofek, Mike Sinclair, John R. Porter, Chris Yoon, Swetha Machanavajhala, Meredith Ringel Morris
11:30-11:45 Am I Allergic to This? Assisting Sight Impaired People in the Kitchen Elisa Ramil Brick, Vanesa Caballero Alonso, Conor O'Brien, Sheron Tong, Emilie Tavernier, Amit Parekh, Angus Addlesee, Oliver Lemon
11:45-12:00 MindfulNest: Strengthening Emotion Regulation with Tangible User Interfaces Samantha Speer, Emily Hamner, Michael Tasota, Lauren Zito, Sarah Byrne-Houser
12:00-12:15 Break
12:15-13:15 ICMI Sustained Accomplishment Award keynote: Elisabeth André (In person) Yukiko Nakano and Raj Tumuluri
Socially Interactive Artificial Intelligence: Past, Present and Future Elisabeth André
13:15-15:00 Lunch
15:00-17:00 Posters (Hybrid) Catherine Pelachaud
ML-PersRef: A Machine Learning-based Personalized Multimodal Fusion Approach for Referencing Outside Objects From a Moving Vehicle Amr Gomaa, Guillermo Reyes, Michael Feld
Advances in Multimodal Behavioral Analytics for Early Dementia Diagnosis: A Review Chathurika Palliya Guruge, Sharon Oviatt, Pari Delir Haghighi, Elizabeth K Pritchard
ConAn: A Usable Tool for Multimodal Conversation Analysis Anna Penzkofer, Philipp Müller, Felix Christian Bühler, Sven Mayer, Andreas Bulling
Prediction of Interlocutor's Subjective Impressions based on Functional Head-Movement Features in Group Meetings Shumpei Otsuchi, Yoko Ishii, Momoko Nakatani, Kazuhiro Otsuka
Improved Speech Emotion Recognition using Transfer Learning and Spectrogram Augmentation Sarala Padi, Seyed Omid Sadjadi, Ram Sriram, Dinesh Manocha
ThermEarhook: Investigating Spatial Thermal Haptic Feedback on the Auricular Skin Area Arshad Nasser, Kexin Zheng, Kening Zhu
Investigating the Effect of Polarity in Auditory and Vibrotactile Displays Under Cognitive Load Jamie Ferguson, Euan Freeman, Stephen Brewster
User Preferences for Calming Affective Haptic Stimuli in Social Settings Shaun Alexander Macdonald, Euan Freeman, Stephen Brewster, Frank Pollick
Improving the Movement Synchrony Estimation with Action Quality Assessment in Children Play Therapy Jicheng Li, Anjana Bhat, Roghayeh Barmaki
Learning Oculomotor Behaviors from Scanpath Beibin Li, Nicholas Nuechterlein, Erin Barney, Claire Foster, Minah Kim, Monique Mahony, Adham Atyabi, Li Feng, Quan Wang, Pamela Ventola, Linda Shapiro, Frederick Shic
Multimodal Detection of Drivers Drowsiness and Distraction Kapotaksha Das, Salem Sharak, Kais Riani, Mohamed Abouelenien, Mihai Burzo, Michalis Papakostas
On the Transition of Social Interaction from In-Person to Online: Predicting Changes in Social Media Usage of College Students during the COVID-19 Pandemic based on Pre-COVID-19 On-Campus Colocation Weichen Wang, Jialing Wu, Subigya Kumar Nepal, Alex daSilva, Elin Hedlund, Eilis Murphy, Courtney Rogers, Jeremy F. Huckins
Head Matters: Explainable Human-centered Trait Prediction from Head Motion Dynamics Surbhi Madan, Monika Gahalawat, Tanaya Guha, Ramanathan Subramanian
An Automated Mutual Gaze Detection Framework for Social Behavior Assessment in Therapy for Children with Autism Zhang Guo, Kangsoo Kim, Anjana Bhat, Roghayeh Barmaki
Inclusive Voice Interaction Techniques for Creative Object Positioning Farkhandah Aziz, Chris Creed, Maite Frutos-Pascual, Ian Williams
Interaction Modalities for Notification Signals in Augmented Reality May Jorella Lazaro, Sungho Kim, Jaeyong Lee, Jaemin Chun, Myung-Hwan Yun
PARA: Privacy Management and Control in Emerging IoT Ecosystems using Augmented Reality Carlos Bermejo Fernandez, Lik Hang Lee, Petteri Nurmi, Pan Hui
Feature Perception in Broadband Sonar Analysis - Using the Repertory Grid to Elicit Interface Designs to Support Human-Autonomy Teaming Faye McCabe, Christopher Baber
To Rate or Not To Rate: Investigating Evaluation Methods for Generated Co-Speech Gestures Pieter Wolfert, Jeffrey M. Girard, Taras Kucherenko, Tony Belpaeme
Audiovisual Speech Synthesis using Tacotron2 Ahmed Hussen Abdelaziz, Anushree Prasanna Kumar, Chloe Seivwright, Gabriele Fanelli, Justin Binder, Yannis Stylianou, Sachin Kajareker
What’s This? A Voice and Touch Multimodal Approach for Ambiguity Resolution in Voice Assistants Jaewook Lee, Sebastian S. Rodriguez, Raahul Natarrajan, Jacqueline Chen, Harsh Deep, Alex Kirlik
Graph Capsule Aggregation for Unaligned Multimodal Sequences Jianfeng Wu, Sijie Mai, Haifeng Hu
Design and Development of a Low-cost Device for Weight and Center of Gravity Simulation in Virtual Reality Diego Vilela Monteiro, Hai-Ning Liang, Xian Wang, Wenge Xu, Huawei Tu
17:00-18:00 Happy Hour Reception
18:00-20:00 Dinner
20:00-22:00 Repeat posters (Virtual) Catherine Pelachaud
Day 2: Oct 20 Session/Paper Name Chair/Authors
8:00-9:00 Keynote: Karon MacLean (In person) Zakia Hammal and Maria Ines Torres
Incorporating haptics into the theatre of multimodal experience design; and the ecosystem this requires Karon MacLean
9:00-10:15 Oral session "Conversation, Dialogue Systems and Language Analytics" Theodora Chaspari and Chee Wee Leong
9:00-9:15 🏆 Best Paper Nominee: A Systematic Cross-Corpus Analysis of Human Reactions to Robot Conversational Failures Dimosthenis Kontogiorgos, Minh Tran, Joakim Gustafson, Mohammad Soleymani
9:15-9:30 Modelling and Predicting Trust for Developing Proactive Dialogue Strategies in Mixed-Initiative Interaction Matthias Kraus, Nicolas Wagner, Wolfgang Minker
9:30-9:45 Recognizing Perceived Interdependence in Face-to-Face Negotiations through Multimodal Analysis of Nonverbal Behavior Bernd Dudzik, Simon Columbus, Tiffany Matej Hrkalovic, Daniel Balliet, Hayley Hung
9:45-10:00 Recognizing Social Signals with Weakly Supervised Multitask Learning for Multimodal dialogue Systems Yuki Hirano, Shogo Okada, Kazunori Komatani
10:00-10:15 Decision-Theoretic Question Generation for Situated Reference Resolution: An Empirical Study and Computational Model Felix Gervits, Gordon Briggs, Antonio Roque, Genki A. Kadomatsu, Dean Thurston, Matthias Scheutz, Matthew Marge
10:15-10:30 Break
10:30-12:00 Oral session "Speech, Gesture and Haptics" Hayley Hung and Karon MacLean
10:30-10:45 Digital Speech Makeup: Voice Conversion Based Altered Auditory Feedback for Transforming Self-Representation Riku Arakawa, Zendai Kashino, Shinnosuke Takamichi, Adrien Verhulst, Masahiko Inami
10:45-11:00 Hierarchical Classification and Transfer Learning to Recognize Head Gestures and Facial Expressions Using Earbuds Shkurta Gashi, Aaqib Saeed, Alessandra Vicini, Elena Di Lascio, Silvia Santini
11:00-11:15 Integrated Speech and Gesture Synthesis Siyang Wang, Simon Alexanderson, Joakim Gustafson, Jonas Beskow, Gustav Eje Henter, Éva Székely
11:15-11:30 Co-Verbal Touch: Enriching Video Telecommunications with Remote Touch Technology Angela Chan, Francis Quek, Takashi Yamauchi, Jinsil Hwaryoung Seo
11:30-11:45 HapticLock: Eyes-Free Authentication for Mobile Devices Gloria Dhandapani, Jamie Ferguson, Euan Freeman
11:45-12:00 The Impact of Prior Knowledge on the Effectiveness of Haptic and Visual Modalities for Teaching Forces Kern Qi, David Borland, Emily Brunsen, James Minogue, Tabitha C. Peck
12:00-12:15 Break
12:15-13:15 Blue Sky Special Session, Awards, and Moderated Audience Discussion Sharon Oviatt and Louis Philippe Morency
12:15-12:35 Optimized Human-AI Group Decision Making: A Personal View Alex Pentland
12:35-12:55 Towards Sonification in Multimodal and User-friendly Explainable Artificial Intelligence Björn Schuller, Tuomas Virtanen, Maria Riveiro, Georgios Rizos, Jing Han, Annamaria Mesaros, Konstantinos Drossos
12:55-13:15 Dependability and Safety: Two Clouds in the Blue Sky of Multimodal Interaction Philippe Palanque, David Navarre
13:15-15:00 Lunch
15:00-17:00 Posters/Demos (Hybrid) Nina Knieriemen
Cross-modal Assisted Training for Abnormal Event Recognition in Elevators Xinmeng Chen, Xuchen Gong, Ming Cheng, Qi Deng, Ming Li
Towards Automatic Narrative Coherence Prediction Filip Bendevski, Jumana Ibrahim, Tina Krulec, Theodore Waters, Nizar Habash, Hanan Salam, Himadri Mukherjee, Christin Camia
TaxoVec: Taxonomy Based Representation for Web User Profiling Qinpei Zhao, Xiongbaixue Yan, Yinjia Zhang, Weixiong Rao, Jiangfeng Li, Chao Mi, Jessie Chen
Approximating the Mental Lexicon from Clinical Interviews as a Support Tool for Depression Detection Esaú Villatoro Tello, Gabriela Ramírez-de-la-Rosa, Daniel Gatica-Perez, Mathew Magimai Doss, Héctor Jiménez-Salazar
Long-Term, in-the-Wild Study of Feedback about Speech Intelligibility for K-12 Students Attending Class via a Telepresence Robot Matthew Rueben, Mohammad Syed, Emily London, Mark Camarena, Eunsook Shin, Yulun Zhang, Timothy S. Wang, Thomas R. Groechel, Rhianna Lee, Maja J. Mataric
EyeMU Interactions: Gaze + IMU Gestures on Mobile Devices Andy Kong, Karan Ahuja, Mayank Goel, Chris Harrison
Multimodal User Satisfaction Recognition for Non-task Oriented Dialogue Systems Wenqing Wei, Sixia Li, Shogo Okada, Kazunori Komatani
Cross Lingual Video and Text Retrieval: A New Benchmark Dataset and Algorithm Jayaprakash Akula, Abhishek, Rishabh Dabral, Preethi Jyothi, Ganesh Ramakrishnan
Interaction Techniques for 3D-positioning Objects in Mobile Augmented Reality Carl-Philipp Hellmuth, Miroslav Bachinski, Jörg Müller
Engagement Rewarded Actor-Critic with Conservative Q-Learning for Speech-Driven Laughter Backchannel Generation Öykü Zeynep Bayramoğlu, Engin Erzin, T. Metin Sezgin, Yucel Yemez
Knowing Where and What to Write in Automated Live Video Comments: A Unified Multi-Task Approach Hao Wu, Gareth James Francis Jones, Francois Pitie
Tomato Dice: A Multimodal Device to Encourage Breaks during Work Marissa A. Thompson, Lynette Tan, Cecilia Soto, Jaitra Dixit, Mounia Ziat
Looking for Laughs: Gaze Interaction with Laughter Pragmatics and Coordination Chiara Mazzocconi, Vladislav Maraev, Vidya Somashekarappa, Christine Howes
Inflation-Deflation Networks for Recognizing Head-Movement in Face-to-Face Conversations Kazuki Takeda, Kazuhiro Otsuka
Perception of Ultrasound Haptic Focal Point Motion Euan Freeman, Graham Wilson
Sensorimotor Synchronization in Blind Musicians: Does Lack of Vision Influence Non-verbal Musical Communication? Erica Volta, Giulia Cappagli, Monica Gori, Gualtiero Volpe
Group-Level Focus of Visual Attention for Improved Active Speaker Detection Christopher Birmingham, Maja Mataric, Kalin Stefanov
Knock&Tap: Classification and Localization of Knock and Tap Gestures using Deep Sound Transfer Learning Jung-Hwa Kim, Jae-Yeop Jeong, Ha yeong Yoon, Jin-Woo Jeong
How Do HCI Researchers Describe Their Software Tools? Insights From a Synopsis Survey of Tools for Multimodal Interaction Mihail Terenti, Radu-Daniel Vatavu
Multisensor-Pipeline: A Lightweight, Flexible, and Extensible Framework for Building Multimodal-Multisensor Interfaces Michael Barz, Omair Shahzad Bhatti, Bengt Lüers, Alexander Prange, Daniel Sonntag
Detecting Face Touching with Dynamic Time Warping on Smartwatches: A Preliminary Study Yu-Peng Chen, Chen Bai, Adam Wolach, Mamoun T. Mardini, Lisa Anthony
Predicting Worker Accuracy from Nonverbal Behaviour: Benefits and Potential for Algorithmic Bias Yuushi Toyoda, Gale Lucas, Jonathan Gratch
NLP-guided Video Thin-slicing for Automated Scoring of Non-Cognitive, Behavioral Performance Tasks Chee Wee Leong, Xianyang Chen, Vinay Basheerabad, Chong Min Lee, Patrick Houghton
Haply 2diy: An Accessible Haptic Platform Suitable for Remote Learning Antoine Weill-Duflos, Nicholas Ong, Felix Desourdy, Benjamin Delbos, Steve Ding, Colin Gallacher
17:00-18:00 Demonstrations and Exhibits (Virtual) Dan Bohus and Brandon Booth
Multimodal Interaction in the Production Line - An OPC UA-based Framework for Injection Molding machinery Ferdinand Fuhrmann, Anna Weber, Stefan Ladstätter, Stefan Dietrich, Johannes Rella
Introducing an Integrated VR Sensor Suite and Cloud Platform Kai-min Kevin Chang, Yueran Yuan
Web-ECA: A Web-based ECA Platform Fumio Nihei, Yukiko I. Nakano
Combining Visual and Social Dialogue for Human-Robot Interaction Nancie Gunson, Daniel Hernandez Garcia, Jose L. Part, Yanchao Yu, Weronika Sieińska, Christian Dondrup, Oliver Lemon
Haply 2diy: An Accessible Haptic Platform Suitable for Remote Learning Antoine Weill-Duflos, Nicholas Ong, Felix Desourdy, Benjamin Delbos, Steve Ding, Colin Gallacher
NLP-guided Video Thin-slicing for Automated Scoring of Non-Cognitive, Behavioral Performance Tasks Chee Wee Leong, Xianyang Chen, Vinay Basheerabad, Chong Min Lee, Patrick Houghton
The EMPATHIC Virtual Coach: A Demo Javier M. Olaso, Alain Vázquez, Jofre Tenorio-Laranga, Begoña Fernández-Ruanova, Eduardo González-Fraile, Kristin Beck Gjellesvik, Maria Stylianou Kornes, Anna Torp Johansen, Anna Esposito, Luigi Vinvitelli, Gennaro Cordasco, Aymen Mtibaa, Mohamed Amine Hman, Dijana Petrovska-Delacrétaz, Mikel de Velasco, Leila Ben Letaifa, Raquel Justo, Pau Buch-Cardona, Cristina Palmero, Sergio Escalera, César Montenegro, Asier López-Zorrilla, Roberto Santana, Jose Antonio Lozano, Olga Gordeeva, Olivier Deroo, Anaïs Fernández, Daria Kyslitska, Colin Pickard, Cornelius Glackin, Stephan Schlögl, Gérard Chollet, Gary Cahalane, María Inés Torres
18:00-20:00 Awards Banquet (Via Zoom) TBA
20:00-22:00 Repeat posters (Virtual) Nina Knieriemen
Day 3: Oct 21 Session/Paper Name Chair/Authors
8:00-9:00 Keynote: Susanne P. Lajoie (In person) Giovanna Varni and Guoying Zhao
Theory Driven Approaches to the Design of Multimodal Assessments of Learning, Emotion, and Self-Regulation in Medicine Susanne P. Lajoie
9:00-10:20 Oral session "Behavioral Analytics and Applications" Hung-Hsuan Huang and Philippe Palanque
9:00-9:10 Conversational Group Detection with Graph Neural Networks Sydney Thompson, Abhijit Gupta, Anjali W. Gupta, Austin Chen, Marynel Vázquez
9:10-9:25 Attachment Recognition in School Age Children Based on Automatic Analysis of Facial Expressions and Nonverbal Vocal Behaviour Huda Alsofyani, Alessandro Vinciarelli
9:25-9:40 Characterizing Children's Motion Qualities: Implications for the Design of Motion Applications for Children Aishat Aloba, Lisa Anthony
9:40-9:55 Temporal Graph Convolutional Network for Multimodal Sentiment Analysis Jian Huang, Zehang Lin, Zhenguo Yang, Wenyin Liu
9:55-10:10 Toddler-Guidance Learning: Impacts of Critical Period on Multimodal AI Agents Junseok Park, Kwanyoung Park, Hyunseok Oh, Ganghun Lee, Minsu Lee, Youngki Lee, Byoung-Tak Zhang
10:10-10:20 Self-supervised Contrastive Learning of Multi-view Facial Expressions Shuvendu Roy, Ali Etemad
10:20-10:30 Break
10:30-12:00 Oral session "Multimodal Ethics, Interfaces and Applications" Mary Czerwinski and Mohammad Soleymani
10:30-10:45 Bias and Fairness in Multimodal Machine Learning: A Case Study of Automated Video Interviews Brandon M. Booth, Louis Hickman, Shree Krishna Subburaj, Louis Tay, Sang Eun Woo, Sidney K. D'Mello
10:45-11:00 🏆 Best Paper Nominee: Impact of the Size of Modules on Target Acquisition and Pursuit for Future Modular Shape-changing Physical User Interfaces Laura Pruszko, Yann Laurillau, Benoît Piranda, Julien Bourgeois, Céline Coutrix
11:00-11:15 Why Do I Have to Take Over Control? Evaluating Safe Handovers with Advance Notice and Explanations in HAD Frederik Wiehr, Anke Hirsch, Lukas Schmitz, Nina Knieriemen, Antonio Krüger, Alisa Kovtunova, Stefan Borgwardt, Ernie Chang, Vera Demberg, Marcel Steinmetz, Jörg Hoffmann
11:15-11:30 Technology as Infrastructure for Dehumanization: Three Hundred Million People with the Same Face Sharon Oviatt
11:30-11:45 Investigating Trust in Human-Machine Learning Collaboration: A Pilot Study on Estimating Public Anxiety from Speech Abdullah Aman Tutul, Ehsanul Haque Nirjhar, Theodora Chaspari
11:45-12:00 🏆 Best Paper Nominee: What's Fair is Fair: Detecting and Mitigating Encoded Bias in Multimodal Models of Museum Visitor Attention* Halim Acosta, Nathan Henderson, Jonathan Rowe, Wookhee Min, James Minogue, James Lester
12:00-12:15 Break
12:15-13:05 ICMI Open Public Forum Zakia Hammal and Albert Ali Salah
13:05-13:15 Ending session
13:15-15:00 Lunch / Discussion
15:00-17:00 Posters (Hybrid) Carlos Busso
Deep Transfer Learning for Recognizing Functional Interactions via Head Movements in Multiparty Conversations Takashi Mori, Kazuhiro Otsuka
Gaze-based Multimodal Meaning Recovery for Noisy/Complex Environments Ozge Alacam, Eugen Ruppert, Ganeshan Malhotra, Chris Biemann
Semi-supervised Visual Feature Integration for Language Models through Sentence Visualization Lisai Zhang, Qingcai Chen, Joanna Siebert, Buzhou Tang
Speech Guided Disentangled Visual Representation Learning for Lip Reading Ya Zhao, Cheng Ma, Zunlei Feng, Mingli Song
Enhancing Ultrasound Haptics with Parametric Audio Effects Euan Freeman
Mass-deployable Smartphone-based Objective Hearing Screening with Otoacoustic Emissions Nils Heitmann, Thomas Rosner, Samarjit Chakraborty
Intra- and Inter-Contrastive Learning for Micro-expression Action Unit Detection Yante Li, Guoying Zhao
HEMVIP: Human Evaluation of Multiple Videos in Parallel Patrik Jonell, Youngwoo Yoon, Pieter Wolfert, Taras Kucherenko, Gustav Eje Henter
Knowledge- and Data-Driven Models of Multimodal Trajectories of Public Speaking Anxiety in Real and Virtual Settings Ehsanul Haque Nirjhar, Amir H. Behzadan, Theodora Chaspari
Predicting Gaze from Egocentric Social Interaction Videos and IMU Data Sanket Kumar Thakur, Cigdem Beyan, Pietro Morerio, Alessio Del Bue
An Interpretable Approach to Hateful Meme Detection Tanvi Deshpande, Nitya Mani
Human-Guided Modality Informativeness for Affective States Torsten Wörtwein, Lisa B. Sheeber, Nicholas Allen, Jeffrey F. Cohn, Louis-Philippe Morency
Direct Gaze Triggers Higher Frequency of Gaze Change: An Automatic Analysis of Dyads in Unstructured Conversation Georgiana Cristina Dobre, Marco Gillies, Patrick Falk, Jamie A. Ward, Antonia F. de C. Hamilton, Xueni Pan
Online Study Reveals the Multimodal Effects of Discrete Auditory Cues in Moving Target Estimation Task Katsutoshi Masai, Akemi Kobayashi, Toshitaka Kimura
DynGeoNet: Fusion Network for Micro-expression Spotting Thuong-Khanh Tran, Quang-Nhat Vo, Guoying Zhao
Earthquake Response Drill Simulator based on a 3-DOF Motion base in Augmented Reality Namkyoo Kang, SeungJoon Kwon, JongChan Lee, Sang-Woo Seo
States of Confusion: Eye and Head Tracking Reveal Surgeons' Confusion during Arthroscopic Surgery Benedikt Hosp, Myat Su Yin, Peter Haddawy, Ratthaphum Watcharopas, Paphon Sa-Ngasoongsong, Enkelejda Kasneci
Personality Prediction with Cross-Modality Feature Projection Daisuke Kamisaka, Yuichi Ishikawa
Attention-based Multimodal Feature Fusion for Dance Motion Generation Kosmas Kritsis, Aggelos Gkiokas, Aggelos Pikrakis, Vassilis Katsouros
Multimodal Approach for Assessing Neuromotor Coordination in Schizophrenia Using Convolutional Neural Networks Yashish M. Siriwardena, Carol Espy-Wilson, Chris Kitchen, Deanna L. Kelly
M2H2: A Multimodal Multiparty Hindi Dataset For Humor Recognition in Conversations Dushyant Singh Chauhan, Gopendra Vikram Singh, Navonil Majumder, Amir Zadeh, Asif Ekbal, Pushpak Bhattacharyya, Louis-Philippe Morency, Soujanya Poria
17:00-18:00 Break
18:00-20:00 Dinner
20:00-22:00 Repeat posters (Virtual) TBA

Detailed Program: Tutorial, Workshops and Doctoral Consortium (Oct 18, 22)

Oct 18 Half day Ethics Tutorial    
Time Session Name Paper title Authors
09:00-09:30 Part 1 General introduction to ethical issues in AI and Human-machine interaction Raja Chatila
09:30-10:00 Part 2 Ethics in Extended Reality Monique Morrow
10:00-10:30 Part 3 Philosophical perspective and issues in social robots Johanna Seibt
10:30-10:45 Break
10:45-11:15 Part 4 Ethics in Natural Language Processing Karën Fort
11:15-11:45 Part 5 Ethics in Human robot interaction and vulnerable persons Mohamed Chetouani and David Cohen
11:45-12:15 Final discussion and closing remarks
Oct 18 Full day Doctoral Consortium (Via Zoom)  
Time Session Name Paper title Authors
09:00 - 09:30 Invited talk: Sean Andrist Situated Interaction with Socially Intelligent Systems Sean Andrist
09:30 - 10:50 DC talks Using Generative Adversarial Networks to Create Graphical User Interfaces for Video Games Christopher Acornley
Semi-Supervised Learning for Multimodal Speech and Emotion Recognition Yuanchao Li
Development of an Interactive Human/Agent Loop using Multimodal Recurrent Neural Networks Jieyeon Woo
Assisted End-User Robot Programming Gopika Ajaykumar
10:50 - 11:10 Break
11:10 - 12:30 DC talks What if I Interrupt You Liu Yang
Natural Language Stage of Change Modelling for “Motivationally-driven” Weight Loss Support Selina Meyer
Photogrammetry-based VR Interactive Pedagogical Agent for K12 Education Laduona Dai
Understanding Personalised Auditory-Visual Associations in Multi-Modal Interactions Patrick O'Toole
12:30 - 12:40 Break
12:40 - 13:40 Panel Discussion (general PhD advice from Tutors for the participants)
13:40 - 14:00 Closing
Oct 18 Half Day The Second International Workshop on Automated Assessment of Pain (AAP)
Time Session Name Paper title Authors
7:00 - 7:05 Opening
7:05 - 7:45 Keynote: Ken Prkachin Behavioural perspectives on automated pain assessment: forty years in the trenches
7:45 - 8:00 Break
8:00 - 8:10 Towards Chatbot-Supported Self-Reporting for Increased Reliability and Richness of Ground Truth for Automatic Pain Recognition: Reflections on Long-Distance Runners and People with Chronic Pain Tao Bi, Raffaele Buono
8:10 - 8:50 Panellists' short keynotes Prof. Amanda CdC Williams (University College London - UK) Pain and chronic pain in animals; Dr Marwa Mahmoud (Cambridge University - UK) Automatic detection of pain in non-human animals; Prof. Lola Canamero (CY Cergy Paris University, France) Modeling and understanding pain and mood in robots Amanda Williams et al.
8:50 - 9:20 Panel or round table Pain in humans and other animals and the role and design of ethical technology
9:20 - 9:30 Closing remarks
Oct 18 Full Day Insights on Group & Team Dynamics  
Time Session Name Paper title Authors
9:00 - 09:15 Opening Hayley Hung
09:15 - 10:00 Keynote: Scott Poole A Social Media Based Decision Support System: Combining Participant Input with Interaction Analytics in Decision Making
10:00 - 10:40 On the Sound of Successful Meetings: How Speech Prosody predicts Meeting Performance Oliver Niebuhr, Ronald Böck and Joseph A. Allen.
Self-assessed Emotion Classification from Acoustic and Physiological Features within Small-group Conversation Woan-Shiuan Chien, Huang-Cheng Chou and Chi-Chun Lee.
10:40 - 11:00 Break
11:00 - 11:50 A Hitchhiker's Guide towards Transactive Memory System Modeling in Small Group Interactions Enzo Tartaglione, Maurizio Mancini, Beatrice Biancardi and Giovanna Varni.
Discovering where we excel: Investigating the mechanism of inclusive turn-taking in teams Ki-Won Haan, Christoph Riedl and Anita Woolley.
Clustering and Multimodal Analysis of Participants in Task-Based Discussions David Johnson and Gabriel Murray.
11:50 - 12:10 Break
12:10 - 12:55 Keynote: Giovanna Varni A look at automated groups’ analysis
12:55 - 13:35 Break
13:35 - 14:30 An Exploratory Computational Study on the Effect of Emergent Leadership on Social and Task Cohesion Soumaya Sabry, Lucien Maman and Giovanna Varni.
Belongingness and Satisfaction Recognition from Physiological Synchrony with A Group-Modulated Attentive BLSTM under Small-group Conversation Woan-Shiuan Chien, Huang-Cheng Chou and Chi-Chun Lee. 
Get Together in the Middle-earth: a First Step Towards Hybrid Intelligence Systems Giovanna Varni, André-Marie Pez and Maurizio Mancini.
14:30 - 14:40 Break
14:40 - 15:25 Plenary Discussion: Bottlenecks to Bridging the Gap / Happy hour
15:25 - 15:40 Closing / Going for Coffee for in person participants
Oct 18 Full Day CATS2021: International Workshop on Corpora and Tools for Social Skills Annotation
Time Session Name Paper title Authors
9:00-9:10 Welcome and Workshop introduction
9:10-10:10 Keynote: Tobias Baur
10:10-10:55 Oral presentations - Annotations A Development of a Multimodal Behavior Analysis System for Evaluating Dementia Care Interaction Shogo Ishikawa, Masashi Onozuka, Atsushi Omata, Ayumi Nakanome, Sota Kayama and Shinya Kiriyama
An Opportunity to Investigate the Role of Specific Nonverbal Cues and First Impression in Interviews using Deepfake Based Controlled Video Generation Rahil Vijay, Kumar Shubham, Laetitia Renier, Emmanuelle Kleinlogel, Marianne Schmid Mast and Dinesh Babu Jayagopi
Setting Up a Health-related Quality of Life Vocabulary Paula Alexandra Silva and Renato Santos
10:55-11:30 Q&A + Panel - Annotations
11:30-11:50 Break
11:50-12:50 Keynote: Daniel Gatica-Perez
12:50-13:40 Break
13:40-14:40 Keynote: Laura Cabrera-Quirós
14:40-15:40 Oral presentations - Datasets ChiCo: A Multimodal Corpus for the Study of Child Conversation Kübra Bodur, Mitja Nikolaus, Fatima Kassim, Laurent Prévot and Abdellah Fourtassi
IdlePose: a dataset of spontaneous idle motions Brian Ravenet
Making Automatic Movement Features Extraction Suitable for Non-engineer Students Nicola Corbellini and Gualtiero Volpe
A Systematic Review on Dyadic Conversation Visualizations Joshua Kim, Rafael Calvo, Nick Enfield and Kalina Yacef
15:40-16:20 Q&A + Panel - Datasets
16:20-16:40 Break
16:40-17:30 Closing + Mini Networking
Oct 18 Full day Workshop on Multimodal Affect and Aesthetic Experience  
Time Session Name Paper title Authors
08:00-08:15 Opening Remarks
08:15-09:15 Keynote: Sarah Kenderdine  TBA
09:15-09:45 paper 1 Multimodal Assessment of Network Music Performance Konstantinos Tsioutas, Konstantinos Ratzos, George Xylomenos and Ioannis Doumanis
09:45-09:55 Break
09:55-10:55 Keynote: Marinos Koutsomichalis TBA
10:55-11:25 paper 2 When Emotions are Triggered by Single Musical Notes: Revealing the Underlying Factors of Auditory-Emotion Associations Patrick O'Toole, Donald Glowinski, Ian Pitt and Maurizio Mancini
11:25-11:35 Break
11:35-12:35 Keynote: Florence Dozol TBA
12:35-13:05 paper 3 ArtBeat - Deep Convolutional Networks for Emotional Inference to Enhance Art with Music Liam Hebert, Elizabeth Eddy, Will Harrington, Lauryn Marchand, Jason d'Eon and Sageev Oore
13:05-13:35 Discussion/Closing Remarks
Oct 18 Full day The 6th International Workshop on Affective Social Multimedia Computing (ASMMC 2021)
Time Session Name Paper title Authors
08.00-08.15 Opening statement: Youjun Xiong
08.15-09.15 Keynote: Jin Qin Multimodal Emotion Recognition
Multimodal Emotion Recognition
09.15-10.05 FER by Modeling the Conditional Independence between the Spatial Cues and the Spatial Attention Distributions Wan Ding, Dongyan Huang, Jingjun Liang, Jinlong Jiao and Zhiping Zhao
10.05-10.25 Efficient Gradient-based Neural Architecture Search for end-to-end ASR Xian Shi, Pan Zhou, Wei Chen and Lei Xie
10.25-10.45 Temporal Attentive Adversarial Domain Adaption for Cross Cultural Affect Recognition Haifeng Chen, Yifan Deng and Dongmei Jiang
10.45-11.05 A Multimodal Dynamic Neural Network for Call for Help Recognition in Elevators Ran Ju, Huangrui Chu, Yechen Wang, Qi Deng, Ming Cheng and Ming Li
11.05-11.25 A Web-Based Longitudinal Mental Health Monitoring System Zhiwei Chen, Weizhao Yang, Jinrong Li, Jiale Wang, Shuai Li, Ziwen Wang and Lei Xie
11.25-11.35 Break (10 min)
Multimodal Emotion Synthesis
11.35-11.55 Semantic and Acoustic-Prosodic Entrainment of Dialogues in Service Scenarios Liu Yuning, Jianwu Dang, Aijun Li and Di Zhou
11.55-12.15 Improving Model Stability and Training Efficiency in Fast Speed High Quality Expressive Voice Conversion System Zhiyuan Zhao, Jingjun Liang, Zehong Zheng, Linhuang Yan, Zhiyong Yang, Wan Ding, and Dongyan Huang
11.55-12.15 TeNC: Low Bit-Rate Speech Coding with VQ-VAE and GAN Yi Chen, Shan Yang, Na Hu, Lei Xie and Dan Su
12.15-12.35 Noise Robust Singing Voice Synthesis Using Gaussian Mixture Variational Autoencoder Heyang Xue, Xiao Zhang, Jie Wu, Jian Luan, Yujun Wang and Lei Xie
12.35-12.45 Break (10 min)
12.45-13.45 Keynote: Erik Cambria Neurosymbolic AI for Affective Computing and Sentiment Analysis
13.45-13.55 Break (10 min)
Sentiment, Micro-expression and Paralinguistic analysis
13.55-14.15 BERT Based Cross-Task Sentiment Analysis with Adversarial Learning Zhiwei He, Xiangmin Xu, Xiaofen Xing and Yirong Chen
14.15-14.35 Facial Micro-Expression Recognition Based on Multi-Scale Temporal and Spatial Features Hao Zhang, Bin Liu, Jianhua Tao and Zhao Lv
14.35-14.55 Aspect based sentiment analysis is a branch of sentiment analysis Yingtao Huo, Dongmei Jiang and Hichem Sahli
14.55-15.15 Call For Help Detection In Emergent Situations Using Keyword Spotting And Paralinguistic Analysis Huangrui Chu, Yechen Wang, Ran Ju, Yan Jia, Haoxu Wang, Ming Li and Qi Deng
15.15-15.40 Panel discussion and closing remarks
Oct 18 Full day Socially-Informed AI for Healthcare – Understanding and Generating Multimodal Nonverbal Cues
Time Session Name Paper title Authors
10.35-10.55 Computational Measurement of Motor Imitation and Imitative Learning Differences in Autism Spectrum Disorder Casey J. Zampella, Evangelos Sariyanidi, Anne G. Hutchinson, G. Keith Bartley, Robert T. Schultz and Birkan Tunc
10.55-11.15 Listen to the Real Experts: Detecting Need of Caregiver Response in a NICU using Multimodal Monitoring Signals Laura Cabrera-Quiros, Gabriele Varisco, Zhuozhao Zhan, Xi Long, Peter Andriessen, Eduardus J. E. Cottaar and Carola van Pul
11.15-11.35 Differentiating Surgeons' Expertise solely by Eye Movement Features Benedikt Hosp, Myat Su Yin, Peter Haddawy and Enkelejda Kasneci
11.35-12.00 Break
12.00-13.00 Keynote: Stefan Scherer TBA
13.00-14.00 Keynote: Laurel Riek TBA
14.00-15.00 Panel and closing remarks
Oct 22 Half day 2nd Workshop on Social Affective Multimodal Interaction for Health (SAMIH)
Time Session Name Paper title Authors
8:00-8:10 Opening remarks by workshop organizers
8:10-8:50 Keynote 1: Dinesh Babu Jayagopi Multimodal Analysis and Synthesis for Conversational Research  
8:50-9:05 Paper 1 Social Robots to Support Gesture Imitation in Children with ASD Berardina Nadja De Carolis, Nicola Macchiarulo, Francesca D'Errico, Giuseppe Palestra
9:05-9:20 Paper 2 "You made me feel this way": Investigating Partners' Influence in Predicting Emotions in Couples' Conflict Interactions using Speech Data George Boateng, Peter Hilpert, Guy Bodenmann, Mona Neysari, Tobias Kowatsch
9:20-9:35 Paper 3 BERT meets LIWC: Exploring State-of-the-Art Language Models for Predicting Communication Behavior in Couples’ Conflict Interactions Jacopo Biggiogera, George Boateng, Peter Hilpert, Matthew Vowels, Guy Bodenmann, Mona Neysari, Fridtjof Nussbeck, Tobias Kowatsch
9:35-9:50 Break
9:50-10:30 Keynote 2: Takashi Kudo The Point of Action where Cognitive Behavioral Therapy is Effective
10:30-10:45 Paper 4 A Framework for the Assessment and Training of Collaborative Problem-Solving Social Skills Jennifer Hamet Bagnou, Elise Prigent, Jean-Claude Martin, Jieyeon Woo, Liu Yang, Catherine Achard, Catherine Pelachaud, Celine Clavel
10:45-11:00 Paper 5 Multimodal Dataset of Social Skills Training in Natural Conversational Setting Takeshi Saga, Hiroki Tanaka, Hidemi Iwasaka, Yasuhiro Matsuda, Tsubasa Morimoto, Mitsuhiro Uratani, Kosuke Okazaki, Yuichiro Fujimoto, Satoshi Nakamura
11:00-11:30 Open discussion and closing remarks
Oct 22 Half day Workshop on modelling socio-emotional and cognitive processes from multimodal data in the wild
Time Session Name Paper title Authors
8:00 - 8:10 Opening remarks by workshop organizers
8:10 - 9:00 Keynote: Valeria Villani A Framework for Affect-Based Natural Human-Robot Interaction
9:00 - 9:20 Paper 1 Clustering of Physiological Signals by Emotional State, Race and Sex Tempestt Neal, Khadija Zanna, Shaun Canavan
9:20 - 9:30 Short Paper 1 Mindscape: Transforming Multimodal Physiological Signals into an Application Specific Reference Frame Frederic Simard, Sayeed Kizuk, Pascal Fortin
9:30 - 9:50 Paper 2 Addressing data scarcity in multimodal user state recorgnition by combining semi-supervised and supervised learning Hendrik Voß, Heiko Wersing, Stefan Kopp
9:50 - 10:00 Short Paper 2 Neuromuscular Performance and Injury Risk Assessment Using Fusion of Multimodal Biophysical and Cognitive Data Ehsan Sobhani, Kian Jalaleddini, Nerea Urrestilla, Rachid Aissaoui, David St-Onge
10:00 - 10:20 Coffee Break
10:20 - 10:40 Paper 3 Meta-learning for Emotion Prediction from EEG while Listening to Music  Kana Miyamoto, Hiroki Tanaka, Satoshi Nakamura
10:40 - 10:50 Short Paper 3 Towards Human-in-the-Loop Autonomous Multi-Robot Operations Marcel Kaufmann, Katherine Sheridan, Giovanni Beltrame
10:50 - 11:10 Paper 4 Towards Reliable Multimodal Stress Detection under Distribution Shift Andreas Foltyn, Jessica Deuschel
11:10 - 11:30 Panel discussion and closing remarks 
Oct 22 Half day 2nd ICMI Workshop on Bridging Social Sciences and AI for Understanding Child Behaviour
Time Session Name Paper title Authors
09:00-09:05 EDT Opening and Keynote Introduction
09:05-10:00 EDT Keynote: Alessandro Vinciarelli Attachment Recognition in School Age Children
10:00-10:05 EDT Keynote Introduction
10:05-11:00 EDT Keynote: Sibel Halfon Mentalization Characteristics of School-Age Children with Clinical Problems
11:00-11:10 EDT Break
11:10-11:25 EDT Automatic Analysis of Infant Engagement During Play: An End-to-End Learning and Explainable AI Pilot Experiment Marc Fraile, Joakim Lindblad, Christine Fawcett, Nataša Sladoje, Ginevra Castellano
11:25-11:45 EDT Recording the Speech of Children with Atypical Development: Peculiarities and Perspectives Elena Lyakso, Olga Frolova
11:45-12:00 EDT Measuring Frequency of Child-directed WH-Question Words for Alternate Preschool Locations using Speech Recognition and Location Tracking Technologies Prasanna Kothalkar, Sathvik Datla, Satwik Dutta, Yagmur Seven, Dwight Irvin, Jay Buzhardt, John Hansen
Oct 22 Half day GENEA Workshop 2021: Generation and Evaluation of Non-verbal Behaviour for Embodied Agents
Time Session Name Paper title Authors
08:00 - 08:10 Opening statement
08:10 - 08:50 Keynote: Hatice Gunes Data-driven Robot Social Intelligence
8:50 - 9:05 Paper 1 Probabilistic Human-like Gesture Synthesis from Speech using GRU-based WGAN Bowen Wu, Chaoran Liu, Carlos Ishi, Hiroshi Ishiguro
9:05 - 9:20 Paper 2 Influence of Movement Energy and Affect Priming on the Perception of Virtual Characters' Extroversion and Mood Tanja Schneeberger, Fatima Ayman Aly, Daksitha Withanage Don, Katharina Gies, Zita Zeimer, Fabrizio Nunnari, Patrick Gebhard
9:20 - 09:35 Paper 3 Crossmodal clustered contrastive learning: Grounding of spoken language to gesture Dong Won Lee, Chaitanya Ahuja, Louis-Philippe Morency
09:35 - 09:50 Break
09:50 - 10:30 Keynote: Louis-Philippe Morency Multimodal AI: Learning Nonverbal Signatures
10:30 - 10:45 Break
10:45 - 10:50 Reproducibility Award announcement
10:50 - 11:50 Group discussions
11:50 - 11:55 Closing remarks
Oct 22 Half day Empowering Interactive Robots by Learning Through Multimodal Feedback Channels    
08:00-08:10 Opening Remarks
08:10-08:50 Keynote: Judith Holler TBA
08:50-09:05 Paper 1 When a Voice Assistant Asks for Feedback: An Empirical Study on Customer Experience with A/B Testing and Causal Inference Methods Yuqi Deng, Sudeeksha Murari
09:05-09:20 Break
09:20-10:00 Keynote: Georgia Chalvatzaki TBA
10:00-10:15 Paper 2 Uncertainties Based Queries for Interactive Policy Learning with Evaluations and Corrections Carlos Celemin, Jens Kober
10:30-10:45 Break
10:45-11:25 Keynote: Heni ben Amor TBA
11:25-11:55 Panel Discussion/Breakout rooms
11:55-12:00 Closing
