ICMI 2020 Conference Program
Global program main conference
We aimed for a "prime time" that accommodates participants from all over the world.
Keynotes will be presented live.
In the Best Paper Nominee sessions, the presentation video for each paper will be played, followed by a live Q&A.
Paper sessions consist of live Q&A.
Social events will be live (and fun!).
Additional information
The keynotes will be presented live through Zoom by the keynote speakers and will be recorded; the Q&A session directly following each keynote will also be recorded. Audience interaction will be handled anonymously.
We will use Ryver as the text-based communication platform during the conference. Through Ryver, participants can chat and ask the presenters questions. The platform will open a week before the conference and remain open until a week after it; the chats will be removed when the platform closes. Only registered ICMI 2020 participants have access to the platform.
Participation guidelines on Zoom and Ryver can be found here.
The timezone of the schedule is UTC+1 *

* Please pay attention to the times. The time of the schedule is in CET time (equal to UTC+1) which is effective from October 25, 2020 onwards (parts of Europe move into winter time). If you are unsure about the time conversion, please try https://www.worldtimebuddy.com/ or have a look at the table below.
CET time effective from Oct 25, 2020:

City                Timezone    Example conversions
Los Angeles, US     UTC-7        5:00   6:00   9:00  11:00
New York, US        UTC-4        8:00   9:00  12:00  14:00
London, UK          UTC         12:00  13:00  16:00  18:00
Utrecht, NL         CET/UTC+1   13:00  14:00  17:00  19:00
Berlin, Germany     UTC+1       13:00  14:00  17:00  19:00
Beijing, China      UTC+8       20:00  21:00   0:00   2:00
Tokyo, Japan        UTC+9       21:00  22:00   1:00   3:00
Sydney, Australia   UTC+11      23:00   0:00   3:00   5:00
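For readers who prefer to check the conversion arithmetic themselves, a minimal sketch using Python's standard zoneinfo module (the helper function name is ours; note that Europe/Amsterdam is CET/UTC+1 during the conference week, after the Oct 25 switch to winter time):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

def cet_to_local(day: int, hhmm: str, tz: str) -> str:
    """Convert a conference time on 2020-10-<day> (Europe/Amsterdam,
    CET after the Oct 25 switch to winter time) to time zone tz."""
    hour, minute = map(int, hhmm.split(":"))
    cet = datetime(2020, 10, day, hour, minute,
                   tzinfo=ZoneInfo("Europe/Amsterdam"))
    return cet.astimezone(ZoneInfo(tz)).strftime("%H:%M")

# The Monday 14:15 CET conference opening in other time zones:
print(cet_to_local(26, "14:15", "America/Los_Angeles"))  # 06:15
print(cet_to_local(26, "14:15", "Asia/Tokyo"))           # 22:15
```

Using the IANA zone names (rather than fixed UTC offsets) also handles cities such as Los Angeles correctly, which is still on daylight-saving time (UTC-7) during the conference week.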
ICMI 2020 Conference Program (all times in CET/UTC+1)
Sunday, 25 October 2020
Workshops, Doctoral Consortium, and Grand Challenge
Monday, 26 October 2020
14:15 - 15:00  Conference Opening & Virtual Tour Utrecht

15:00 - 16:00  Keynote 1: From hands to brains: How does human body talk, think and interact in face-to-face language use?
Prof. Dr. Asli Ozyurek

16:00 - 16:45  Session 1a: Technical papers (Long, Short), DC papers (DC), and Demo papers (Demo)
Estimating the Intensity of Facial Expressions Accompanying Feedback Responses in Multiparty Video-Mediated Communication (Long)
Ryosuke Ueno, Yukiko Nakano, Jie Zeng, Fumio Nihei
Leniency to those who confess?: Predicting the Legal Judgement via Multi-Modal Analysis (Short)
Liang Yang, Jingjie Zeng, Tao Peng, Xi Luo, Jinghui Zhang, Hongfei Lin
LDNN: Linguistic Knowledge Injectable Deep Neural Network for Group Cohesiveness Understanding (Long)
Yanan Wang, Jianming Wu, Jinfa Huang, Gen Hattori, Yasuhiro Takishima, Shinya Wada, Rui Kimura, Jie Chen, Satoshi Kurihara
Job Interviewer Android with Elaborate Follow-up Question Generation (Long)
Koji Inoue, Kohei Hara, Divesh Lala, Kenta Yamamoto, Shizuka Nakamura, Katsuya Takanashi, Tatsuya Kawahara
You Have a Point There: Object Selection Inside an Automobile Using Gaze, Head Pose and Finger Pointing (Long)
Abdul Rafey Aftab, Michael von der Beeck, Michael Feld
PiHearts: Resonating Experiences of Self and Others Enabled by a Tangible Somaesthetic Design (Long)
Ilhan Aslan, Andreas Seiderer, Chi Tai Dang, Simon Rädler, Elisabeth André
Gaze Tracker Accuracy and Precision Measurements in Virtual Reality Headsets (Short)
Jari Kangas, Olli Koskinen, Roope Raisamo
Facilitating Flexible Force Feedback Design with Feelix (Long)
Anke van Oosterhout, Miguel Bruns, Eve Hoggan
Detecting Depression in Less Than 10 Seconds: Impact of Speaking Time on Depression Detection Sensitivity (Long)
Nujud Aloshban, Anna Esposito, Alessandro Vinciarelli
Towards Engagement Recognition of People with Dementia in Care Settings (Long)
Lars Steinert, Felix Putze, Dennis Küster, Tanja Schultz
Detection of Listener Uncertainty in Robot-Led Second Language Conversation Practice (Short)
Ronald Cumbal, José Lopes, Olov Engwall
Combining Auditory and Mid-Air Haptic Feedback for a Light Switch Button (Long)
Cisem Ozkul, David Geerts, Isa Rutten
Did the Children Behave?: Investigating the Relationship Between Attachment Condition and Child Computer Interaction (Long)
Dong Bach Vo, Stephen Brewster, Alessandro Vinciarelli
Personalised Human Device Interaction through Context aware Augmented Reality (DC)
Madhawa Perera
Detection of Micro-expression Recognition Based on Spatio-Temporal Modelling and Spatial Attention (DC)
Mengjiong Bai
Robot Assisted Diagnosis of Autism in Children (DC)
B Ashwini
How to Complement Learning Analytics with Smartwatches?: Fusing Physical Activities, Environmental Context, and Learning Activities (DC)
George-Petru Ciordas-Hertel
Multimodal Physiological Synchrony as Measure of Attentional Engagement (DC)
Ivo Stuldreher
Multimodal Groups' Analysis for Automated Cohesion Estimation (DC)
Lucien Maman
Alfie: An Interactive Robot with Moral Compass (Demo)
Cigdem Turan, Patrick Schramowski, Constantin Rothkopf, Kristian Kersting
Spark Creativity by Speaking Enthusiastically: Communication Training using an E-Coach (Demo)
Carla Viegas, Albert Lu, Annabel Su, Carter Strear, Yi Xu, Albert Topdjian, Daniel Limon, J.J. Xu
FairCVtest Demo: Understanding Bias in Multimodal Learning with a Testbed in Fair Automatic Recruitment (Demo)
Alejandro Peña, Ignacio Serna, Aythami Morales, Julian Fierrez
16:45 - 17:00  Break

17:00 - 17:45  Session 1b: Technical papers (Long, Short), DC papers (DC), and Demo papers (Demo)
MMGatorAuth: A Novel Multimodal Dataset for Authentication Interactions in Gesture and Voice (Long)
Sarah Morrison-Smith, Aishat Aloba, Hangwei Lu, Brett Benda, Shaghayegh Esmaeili, Gianne Flores, Jesse Smith, Nikita Soni, Isaac Wang, Rejin Joy, Damon Woodard, Jaime Ruiz, Lisa Anthony
OpenSense: A Platform for Multimodal Data Acquisition and Behavior Perception (Short)
Kalin Stefanov, Baiyu Huang, Zongjian Li, Mohammad Soleymani
FilterJoint: Toward an Understanding of Whole-Body Gesture Articulation (Long)
Aishat Aloba, Julia Woodward, Lisa Anthony
Fifty Shades of Green: Towards a Robust Measure of Inter-annotator Agreement for Continuous Signals (Long)
Brandon Booth, Shrikanth Narayanan
StrategicReading: Understanding Complex Mobile Reading Strategies via Implicit Behavior Sensing (Long)
Wei Guo, Byeong-Young Cho, Jingtao Wang
Personalized Modeling of Real-World Vocalizations from Nonverbal Individuals (Short)
Jaya Narain, Kristina Johnson, Craig Ferguson, Amanda O'Brien, Tanya Talkar, Yue Weninger, Peter Wofford, Thomas Quatieri, Rosalind Picard, Pattie Maes
Using Emotions to Complement Multi-Modal Human-Robot Interaction in Urban Search and Rescue Scenarios (Long)
Sami Alperen Akgun, Moojan Ghafurian, Mark Crowley, Kerstin Dautenhahn
A Multi-modal System to Assess Cognition in Children from their Physical Movements (Long)
Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Ashish Jaiswal, Alexis Lueckenhoff, Maria Kyrarini, Fillia Makedon
MORSE: MultimOdal sentiment analysis for Real-life SEttings (Long)
Yiqun Yao, Verónica Pérez-Rosas, Mohamed Abouelenien, Mihai Burzo
Preserving Privacy in Image-based Emotion Recognition through User Anonymization (Long)
Vansh Narula, Kexin Feng, Theodora Chaspari
Predicting the Effectiveness of Systematic Desensitization Through Virtual Reality for Mitigating Public Speaking Anxiety (Short)
Margaret von Ebers, Ehsanul Haque Nirjhar, Amir Behzadan, Theodora Chaspari
Early Prediction of Visitor Engagement in Science Museums with Multimodal Learning Analytics (Long)
Andrew Emerson, Nathan Henderson, Jonathan Rowe, Wookhee Min, Seung Lee, James Minogue, James Lester
FeetBack: Augmenting Robotic Telepresence with Haptic Feedback on the Feet (Long)
Brennan Jones, Jens Maiero, Alireza Mogharrab, Ivan Aguliar, Ashu Adhikari, Bernhard Riecke, Ernst Kruijff, Carman Neustaedter, Robert Lindeman
Towards Real-Time Multimodal Emotion Recognition among Couples (DC)
George Boateng
Towards Multimodal Human-Like Characteristics and Expressive Visual Prosody in Virtual Agents (DC)
Mireille Fares
Towards a Multimodal and Context-Aware Framework for Human Navigational Intent Inference (DC)
Zhitian Zhang
Automating Facilitation and Documentation of Collaborative Ideation Processes (DC)
Matthias Merk
Supporting Instructors to Provide Emotional and Instructional Scaffolding for English Language Learners through Biosensor-based Feedback (DC)
Heera Lee
Zero-Shot Learning for Gesture Recognition (DC)
Naveen Madapana
LieCatcher: Game Framework for Collecting Human Judgments of Deceptive Speech (Demo)
Sarah Levitan, Xinyue Tan, Julia Hirschberg
Platform for Situated Intelligence (Exhibit)
Dan Bohus, Sean Andrist
The AI-Medic: A Multimodal Artificial Intelligent Mentor for Trauma Surgery (Demo)
Edgar Rojas-Muñoz, Kyle Couperus, Juan Wachs
17:45 - 18:45  Oral 1: Best Paper Nominees
Dyadic Speech-based Affect Recognition using DAMI-P2C Parent-child Multimodal Interaction Dataset (Long)
Huili Chen, Yue Zhang, Felix Weninger, Rosalind Picard, Cynthia Breazeal, Hae Won Park
Introducing Representations of Facial Affect in Automated Multimodal Deception Detection (Long)
Leena Mathur, Maja Matarić
Toward Multimodal Modeling of Emotional Expressiveness (Long)
Victoria Lin, Jeffrey Girard, Michael Sayette, Louis-Philippe Morency
SmellControl: The Study of Sense of Agency in Smell (Long)
Patricia Cornelio, Emanuela Maggioni, Giada Brianza, Sriram Subramanian, Marianna Obrist
18:45 - 19:30  Virtual Banquet & Pub Quiz
Join us for the Virtual Banquet or Pub Quiz!

Tuesday, 27 October 2020

14:15 - 15:00  Breakout Discussions
Stay tuned for more info!
15:00 - 16:00  Oral 2: Best Paper Nominees
Gesticulator: A Framework for Semantically-aware Speech-driven Gesture Generation (Long)
Taras Kucherenko, Patrik Jonell, Sanne van Waveren, Gustav Henter, Simon Alexandersson, Iolanda Leite, Hedvig Kjellström
Finally on Par?! Multimodal and Unimodal Interaction for Open Creative Design Tasks in Virtual Reality (Long)
Chris Zimmerer, Erik Wolf, Sara Wolf, Martin Fischbach, Jean-Luc Lugrin, Marc Erich Latoschik
A Neural Architecture for Detecting User Confusion in Eye-tracking Data (Long)
Shane Sims, Cristina Conati
The WoNoWa Dataset: Investigating the Transactive Memory System in Small Group Interactions (Long)
Beatrice Biancardi, Lou Maisonnave-Couterou, Pierrick Renault, Brian Ravenet, Maurizio Mancini, Giovanna Varni
16:00 - 16:45  Session 2a: Technical papers (Long, Short) and Late Breaking Results papers (LBR)
Is She Truly Enjoying the Conversation?: Analysis of Physiological Signals toward Adaptive Dialogue Systems (Long)
Shun Katada, Shogo Okada, Yuki Hirano, Kazunori Komatani
Gesture Enhanced Comprehension of Ambiguous Human-to-Robot Instructions (Long)
Dulanga Weerakoon, Vigneshwaran Subbaraju, Nipuni Karumpulli, Tuan Tran, Qianli Xu, U-Xuan Tan, Joo Hwee Lim, Archan Misra
Analyzing Nonverbal Behaviors along with Praising (Short)
Toshiki Onishi, Arisa Yamauchi, Ryo Ishii, Yushi Aono, Akihiro Miyata
LASO: Exploiting Locomotive and Acoustic Signatures over the Edge to Annotate IMU Data for Human Activity Recognition (Long)
Soumyajit Chatterjee, Avijoy Chakma, Aryya Gangopadhyay, Nirmalya Roy, Bivas Mitra, Sandip Chakraborty
Hand-eye Coordination for Textual Difficulty Detection in Text Summarization (Long)
Jun Wang, Grace Ngai, Hong Va Leong
Understanding Applicants' Reactions to Asynchronous Video Interviews Through Self-reports and Nonverbal Cues (Long)
Skanda Muralidhar, Emmanuelle Kleinlogel, Eric Mayor, Marianne Schmid Mast, Adrian Bangerter, Daniel Gatica-Perez
Exploring Personal Memories and Video Content as Context for Facial Behavior in Predictions of Video-Induced Emotions (Long)
Bernd Dudzik, Joost Broekens, Mark Neerincx, Hayley Hung
Purring Wheel: Thermal and Vibrotactile Notifications on the Steering Wheel (Long)
Patrizia Di Campli San Vito, Stephen Brewster, Frank Pollick, Simon Thompson, Lee Skrypchuk, Alexandros Mouzakitis
Analysis of Face-Touching Behavior in Large Scale Social Interaction Dataset (Long)
Cigdem Beyan, Matteo Bustreo, Muhammad Shahid, Gian Luca Bailo, Nicolo Carissimi, Alessio Del Bue
Eliciting Emotion with Vibrotactile Stimuli Evocative of Real-World Sensations (Long)
Shaun Macdonald, Stephen Brewster, Frank Pollick
Multimodal Gated Information Fusion for Emotion Recognition from EEG Signals and Facial Behaviors (Short)
Soheil Rayatdoost, David Rudrauf, Mohammad Soleymani
The iCub Multisensor Datasets for Robot and Computer Vision Applications (Short)
Murat Kirtay, Ugo Albanese, Lorenzo Vannucci, Guido Schillaci, Cecilia Laschi, Egidio Falotico
Investigating LSTM for Micro-Expression Recognition (LBR)
Mengjiong Bai, Roland Goecke
Preliminary Report of Visually Impaired User Experience using a 3D-Enhanced Facility Management System for Indoors Navigation (LBR)
Eduardo Benitez Sandoval, Binghao Li, Kai Zhao, Abdoulaye Diakite, Sisi Zlatanova, Nicholas Oliver, Tomasz Bednarz
Prediction of Shared Laughter for Human-Robot Dialogue (LBR)
Divesh Lala, Koji Inoue, Tatsuya Kawahara
Engagement Analysis of ADHD students using visual cues from Eye tracker (LBR)
Harshit Chauhan, Anmol Prasad, Jainendra Shukla
It's Not What They Play, It's What You Hear: Understanding Perceived vs. Induced Emotions in Hindustani Classical Music (LBR)
Amogh Gulati, Brihi Joshi, Chirag Jain, Jainendra Shukla
Multimodal Self-Assessed Personality Prediction in the Wild (LBR)
Daisuke Kamisaka, Yuichi Ishikawa
Toward Mathematical Representation of Emotion: A Deep Multitask Learning Method Based On Multimodal Recognition (LBR)
Seiichi Harata, Takuto Sakuma, Shohei Kato
The Cross-modal Congruency Effect as an Objective Measure of Embodiment (LBR)
Pim Verhagen, Irene Kuling, Kaj Gijsbertse, Ivo V. Stuldreher, Krista Overvliet, Sara Falcone, Jan Van Erp, Anne-Marie Brouwer
mEBAL: A Multimodal Database for Eye Blink Detection and Attention Level Estimation (LBR)
Roberto Daza, Aythami Morales, Julian Fierrez, Ruben Tolosana
Music-Driven Animation Generation of Expressive Musical Gestures (LBR)
Alysha Bogaers, Zerrin Yumak, Anja Volk
ET-CycleGAN: Generating Thermal Images from Images in the Visible Spectrum for Facial Emotion Recognition (LBR)
Gerard Pons Rodriguez, Abdallah El Ali, Pablo Cesar
16:45 - 17:00  Break

17:00 - 17:45  Session 2b: Technical papers (Long, Short) and Late Breaking Results papers (LBR)
Modality Dropout for Improved Performance-driven Talking Faces (Long)
Ahmed Hussen Abdelaziz, Barry-John Theobald, Paul Dixon, Reinhard Knothe, Nicholas Apostoloff, Sachin Kajareker
Mitigating Biases in Multimodal Personality Assessment (Long)
Shen Yan, Di Huang, Mohammad Soleymani
Enhancing Affect Detection in Game-Based Learning Environments with Multimodal Conditional Generative Modeling (Long)
Nathan Henderson, Wookhee Min, Jonathan Rowe, James Lester
Multimodal Automatic Coding of Client Behavior in Motivational Interviewing (Long)
Leili Tavabi, Kalin Stefanov, Larry Zhang, Brian Borsari, Joshua Woolley, Stefan Scherer, Mohammad Soleymani
Effect of Modality on Human and Machine Scoring of Presentation Videos (Short)
Haley Lepp, Chee Wee Leong, Katrina Roohr, Michelle Martin-Raugh, Vikram Ramanarayanan
Automated Time Synchronization of Cough Events from Multimodal Sensors in Mobile Devices (Short)
Tousif Ahmed, Mohsin Ahmed, Md Mahbubur Rahman, Ebrahim Nemati, Bashima Islam, Korosh Vatanparvar, Viswam Nathan, Daniel McCaffrey, Jilong Kuang, Jun Alex Gao
Examining the Link between Children’s Cognitive Development and Touchscreen Interaction Patterns (Short)
Ziyang Chen, Yu-Peng Chen, Alex Shaw, Aishat Aloba, Pasha Antonenko, Jaime Ruiz, Lisa Anthony
Temporal Attention and Consistency Measuring for Video Question Answering (Long)
Lingyu Zhang, Richard Radke
Speaker-Invariant Adversarial Domain Adaptation for Emotion Recognition (Long)
Yufeng Yin, Baiyu Huang, Yizhen Wu, Mohammad Soleymani
Depression Severity Assessment for Adolescents at High Risk of Mental Disorders (Long)
Michal Muszynski, Jamie Zelazny, Jeffrey Girard, Louis-Philippe Morency
Multimodal, Multiparty Modeling of Collaborative Problem Solving Performance (Long)
Shree Krishna Subburaj, Angela Stewart, Arjun Ramesh Rao, Sidney D'Mello
BreathEasy: Assessing Respiratory Diseases Using Mobile Multimodal Sensors (Long)
Md Mahbubur Rahman, Mohsin Ahmed, Tousif Ahmed, Bashima Islam, Viswam Nathan, Korosh Vatanparvar, Ebrahim Nemati, Daniel McCaffrey, Jilong Kuang, Jun Alex Gao
MSP-Face Corpus: A Natural Audiovisual Emotional Database (Long)
Andrea Vidal, Ali Salman, Wei-Cheng Lin, Carlos Busso
A Phonology-based Approach for Isolated Sign Production Assessment in Sign Language (LBR)
Sandrine Tornay, Necati Cihan Camgoz, Richard Bowden, Mathew Magimai Doss
Speech Emotion Recognition among Elderly Individuals using Multimodal Fusion and Transfer Learning (LBR)
George Boateng, Tobias Kowatsch
The Influence of Blind Source Separation on Mixed Audio Speech and Music Emotion Recognition (LBR)
Casper Laugs, Hendrik Vincent Koops, Daan Odijk, Heysem Kaya, Anja Volk
Gender Classification of Prepubescent Children via Eye Movements with Reading Stimuli (LBR)
Sahar Mahdie Klim Al Zaidawi, Martin H.U. Prinzler, Christoph Schröder, Sebastian Maneth, Gabriel Zachmann
Neuroscience to Investigate Social Mechanisms Involved in Human-Robot Interactions (LBR)
Youssef Hmamouche, Magalie Ochs, Laurent Prévot, Thierry Chaminade
User Expectations and Preferences to How Social Robots Render Text Messages with Emojis (LBR)
Kerstin Fischer, Rosalyn M. Langedijk, Karen Fucinato
Speech Emotion Recognition among Couples using the Peak-End Rule and Transfer Learning (LBR)
George Boateng, Laura Sels, Peter Kuppens, Peter Hilpert, Tobias Kowatsch
A Novel Pseudo Viewpoint based Holoscopic 3D Micro-Gesture Recognition (LBR)
Yi Liu, Shuang Yang, Hongying Meng, Mohammad Rafiq Swash, Shiguang Shan
Not All Errors Are Created Equal: Exploring Human Responses to Robot Errors with Varying Severity (LBR)
Maia Stiber, Chien-Ming Huang
Using Physiological Cues to Determine Levels of Anxiety Experienced among Deaf and Hard of Hearing English Language Learners (LBR)
Heera Lee, Varun Mandalapu, Jiaqi Gong, Andrea Kleinsmith, Ravi Kuber
Physiological Synchrony, Stress and Communication of Paramedic Trainees During Emergency Response Training (LBR)
Vasundhara Misal, Surely Akiri, Sanaz Taherzadeh, Hannah McGowan, Gary Williams, J. Lee Lee Jenkins, Helena Mentis, Andrea Kleinsmith
17:45 - 18:45  Keynote 2 - ICMI Sustained Achievement Award: Human-centered Multimodal Machine Intelligence
Prof. Dr. Shrikanth Narayanan

18:45 - 19:30  Breakout Discussions
Stay tuned for more info!
Wednesday, 28 October 2020
14:15 - 15:00  Drinks & Pub Quiz
Join us for drinks or the Pub Quiz!

15:00 - 16:00  Keynote 3: Sonic Interaction: From gesture to immersion
Prof. Dr. Atau Tanaka

16:00 - 16:45  Session 3a: Technical papers (Long, Short) and Grand Challenge papers (GC)
The eyes know it: FakeET- An Eye-tracking Database to Understand Deepfake Perception (Long)
Parul Gupta, Komal Chugh, Abhinav Dhall, Ramanathan Subramanian
Conventional and Non-conventional Job Interviewing Methods: A Comparative Study in Two Countries (Short)
Kumar Shubham, Emmanuelle Kleinlogel, Anaïs Butera, Marianne Schmid Mast, Dinesh Babu Jayagopi
Mimicker-in-the-Browser: A Novel Interaction Using Mimicry to Augment the Browsing Experience (Long)
Riku Arakawa, Hiromu Yakura
Force9: Force-assisted Miniature Keyboard on Smart Wearables (Long)
Lik Hang Lee, Ngo Yan Yeung, Tristan Braud, Tong Li, Xiang Su, Pan Hui
ROSMI: A Multimodal Corpus for Map-based Instruction-Giving (Short)
Miltiadis Marios Katsakioris, Ioannis Konstas, Pierre Mignotte, Helen Hastie
How Good is Good Enough?: The Impact of Errors in Single Person Action Classification on the Modeling of Group Interactions in Volleyball (Long)
Lian Beenhakker, Fahim Salim, Dees Postma, Robby van Delden, Dennis Reidsma, Bert-Jan van Beijnum
Facial Electromyography-based Adaptive Virtual Reality Gaming for Cognitive Training (Long)
Lorcan Reidy, Dennis Chan, Charles Nduka, Hatice Gunes
Bring the Environment to Life: A Sonification Module for People with Visual Impairments to Improve Situation Awareness (Long)
Angela Constantinescu, Karin Müller, Monica Haurilet, Vanessa Petrausch, Rainer Stiefelhagen
Studying Person-Specific Pointing and Gaze Behavior for Multimodal Referencing of Outside Objects from a Moving Vehicle (Long)
Amr Gomaa, Guillermo Reyes, Alexandra Alles, Lydia Rupp, Michael Feld
Multimodal Data Fusion based on the Global Workspace Theory (Long)
Cong Bao, Zafeirios Fountas, Temitayo Olugbade, Nadia Berthouze
A Comparison between Laboratory and Wearable Sensors in the Context of Physiological Synchrony (Short)
Jasper van Beers, Ivo Stuldreher, Nattapong Thammasan, Anne-Marie Brouwer
"Was that successful?" On Integrating Proactive Meta-Dialogue in a DIY-Assistant using Multimodal Cues (Long)
Matthias Kraus, Marvin Schiller, Gregor Behnke, Pascal Bercher, Michael Dorna, Michael Dambier, Birte Glimm, Susanne Biundo, Wolfgang Minker
Advanced Multi-Instance Learning Method with Multi-features Engineering and Conservative Optimization for Engagement Intensity Prediction (GC)
Jianming Wu, Bo Yang, Yanan Wang, Gen Hattori
Implicit Knowledge Injectable Cross Attention Audiovisual Model for Group Emotion Recognition (GC)
Yanan Wang, Jianming Wu, Panikos Heracleous, Shinya Wada, Rui Kimura, Satoshi Kurihara
A Multi-Modal Approach for Driver Gaze Prediction to Remove Identity Bias (GC)
Zehui Yu, Xiehe Huang, Xiubao Zhang, Haifeng Shen, Qun Li, Weihong Deng, Jian Tang, Yi Yang, Jieping Ye
Group-level Speech Emotion Recognition Utilising Deep Spectrum Features (GC)
Sandra Ottl, Shahin Amiriparian, Maurice Gerczuk, Vincent Karas, Björn Schuller
Multi-rate Attention Based GRU Model for Engagement Prediction (GC)
Bin Zhu, Xinjie Lan, Xin Guo, Kenneth Barner, Charles Boncelet
Fusical: Multimodal Fusion for Video Sentiment (GC)
Boyang Jin, Leila Abdelrahman, Cong Chen, Amil Khanzada
X-AWARE: ConteXt-AWARE Human-Environment Attention Fusion for Driver Gaze Prediction in the Wild (GC)
Lukas Stappen, Georgios Rizos, Björn Schuller
16:45 - 17:00  Break

17:00 - 17:45  Session 3b: Technical papers (Long, Short) and Grand Challenge papers (GC)
Predicting Video Affect via Induced Affection in the Wild (Long)
Yi Ding, Radha Kumaran, Tianjiao Yang, Tobias Höllerer
Influence of Electric Taste, Smell, Color, and Thermal Sensory Modalities on the Liking and Mediated Emotions of Virtual Flavor Perception (Long)
Nimesha Ranasinghe, Meetha Nesam James, Michael Gecawicz, Jonathan Roman Bland, David Smith
Punchline Detection using Context-Aware Hierarchical Multimodal Fusion (Short)
Akshat Choube, Mohammad Soleymani
Going with our Guts: Potentials of Wearable Electrogastrography (EGG) for Affect Detection (Long)
Angela Vujic, Stephanie Tong, Rosalind Picard, Pattie Maes
Eye-Tracking to Predict User Cognitive Abilities and Performance for User-Adaptive Narrative Visualizations (Long)
Oswald Barral, Sebastien Lalle, Grigorii Guz, Alireza Iranpour, Cristina Conati
Effects of Visual Locomotion and Tactile Stimuli Duration on the Emotional Dimensions of the Cutaneous Rabbit Illusion (Long)
Mounia Ziat, Katherine Chin, Roope Raisamo
Toward Adaptive Trust Calibration for Level 2 Driving Automation (Long)
Kumar Akash, Neera Jain, Teruhisa Misu
Incorporating Measures of Intermodal Coordination in Automated Analysis of Infant-Mother Interaction (Long)
Lauren Klein, Victor Ardulov, Yuhua Hu, Mohammad Soleymani, Alma Gharib, Barbara Thompson, Pat Levitt, Maja Matarić
Multimodal Assessment of Oral Presentations using HMMs (Short)
Everlyne Kimani, Prasanth Murali, Ameneh Shamekhi, Dhaval Parmar, Sumanth Munikoti, Timothy Bickmore
The Sensory Interactive Table: Exploring the Social Space of Eating (Short)
Roelof de Vries, Juliet Haarman, Emiel Harmsen, Dirk Heylen, Hermie Hermens
Touch Recognition with Attentive End-to-End Model (Short)
Wail El Bani, Mohamed Chetouani
Attention Sensing through Multimodal User Modeling in an Augmented Reality Guessing Game (Long)
Felix Putze, Dennis Küster, Timo Urban, Alexander Zastrow, Marvin Kampen
Group Level Audio-Video Emotion Recognition Using Hybrid Networks (GC)
Chuanhe Liu, Wenqiang Jiang, Minghao Wang, Tianhao Tang
Group-Level Emotion Recognition Using a Unimodal Privacy-Safe Non-Individual Approach (GC)
Anastasia Petrova, Dominique Vaufreydaz, Philippe Dessus
Recognizing Emotion in the Wild using Multimodal Data (GC)
Shivam Srivastava, Saandeep Lakshminarayan, Saurabh Hinduja, Sk Rahatul Jannat, Hamza Elhamdadi, Shaun Canavan
Multi-modal Fusion Using Spatio-temporal and Static Features for Group Emotion Recognition (GC)
Mo Sun, Jian Li, Hui Feng, Wei Gou, Haifeng Shen, Jian Tang, Yi Yang, Jieping Ye
Extract the Gaze Multi-dimensional Information Analysis Driver Behavior (GC)
Kui Lyu, Minghao Wang, Liyu Meng
EmotiW 2020: Driver Gaze, Group Emotion, Student Engagement and Physiological Signal based Challenges (GC)
Abhinav Dhall, Garima Sharma, Roland Goecke, Tom Gedeon
17:45 - 18:45  Keynote 4: Deep Learning for Joint Vision and Language Understanding
Prof. Dr. Kate Saenko

18:45 - 19:30  ICMI Town Hall Meeting, Awards & Closing
You do not want to miss this!

Thursday, 29 October 2020
Workshops and Tutorials