Blue Sky Presentations (3)
Optimized Human-A.I. Group Decision Making: A Personal View
Sandy Pentland
Towards Sonification in Multimodal and User-Friendly Explainable Artificial Intelligence
Georgios Rizos
Dependability and Safety: Two Clouds in the Blue Sky of Multimodal Interaction
Philippe Palanque
List of Accepted Oral Presentations (34)
New Analytic and Machine Learning Techniques
Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis
Wei Han
Exploiting the Interplay between Social and Task Dimensions of Cohesion to Predict Its Dynamics Leveraging Social Sciences
Lucien Maman
Dynamic Mode Decomposition with Control as a Model of Multimodal Behavioral Coordination
Lauren Klein
A Contrastive Learning Approach for Compositional Zero-Shot Learning
Muhammad Umer Anwaar
Efficient Deep Feature Calibration for Cross-Modal Joint Embedding Learning
Zhongwei Xie
Support for Health, Mental Health and Disability
Multimodal Feature Estimators of Temporal Phases of Anxiety
Hashini Senaratne
Inclusive Action Game Presenting Real-Time Multimodal Presentations for Sighted and Blind Persons
Takahiro Miura
ViCA: Combining Visual, Social, and Task-Oriented Conversational AI in a Healthcare Setting
George Pantazopoulos
Sound to Visual/Haptic Feedback Design in VR for Deaf and Hard of Hearing Users
Dhruv Jain
Am I Allergic to This? Assisting Sight Impaired People in the Kitchen
Angus Addlesee
MindfulNest: Strengthening Emotion Regulation with Tangible User Interfaces
Samantha Speer
Conversation, Dialogue Systems and Language Analytics
A Systematic Cross-Corpus Analysis of Human Reactions to Conversational Robot Failures
Dimosthenis Kontogiorgos
Recognizing Perceived Interdependence in Conversations through Multimodal Analysis of Nonverbal Behavior
Bernd Dudzik
Modelling and Predicting Trust for Developing Proactive Dialogue Strategies in Mixed-Initiative Interaction
Matthias Kraus
Recognizing Social Signals with Weakly Supervised Multitask Learning for Multimodal Dialogue Systems
Shogo Okada
Decision-Theoretic Question Generation for Situated Reference Resolution: An Empirical Study and Computational Model
Felix Gervits
Speech, Gesture and Haptics
Digital Speech Makeup: Voice Conversion Based Altered Auditory Feedback for Transforming Self-Representation
Riku Arakawa
Hierarchical Classification and Transfer Learning to Recognize Head Gestures and Facial Expressions Using Earbuds
Shkurta Gashi
Integrated speech and gesture synthesis
Siyang Wang
Co-Verbal Touch: Enriching Video Telecommunications with Remote Touch Technology
Angela Chan
HapticLock: Eyes-Free Authentication for Mobile Devices
Euan Freeman
Haptic Feedback and Visual Cues for Teaching Forces: The Impact of Prior Knowledge
Tabitha C. Peck
Behavioral Analytics and Applications
Toddler-Guidance Learning: Impacts of Critical Period on Multimodal AI Agents
Junseok Park
Attachment Recognition in School Age Children Based on Automatic Analysis of Facial Expressions and Nonverbal Vocal Behaviour
Alessandro Vinciarelli
Characterizing Children’s Motion Qualities: Implications for the Design of Motion Applications for Children
Aishat Aloba
Temporal Graph Convolutional Network for Multimodal Sentiment Analysis
Jian Huang
Conversational Group Detection with Graph Neural Networks
Sydney Thompson
Self-supervised Contrastive Learning of Multi-view Facial Expressions
Shuvendu Roy
Multimodal Ethics, Interfaces and Applications
What's Fair is Fair: Detecting and Mitigating Encoded Bias in Multimodal Models of Museum Visitor Attention
Halim D Acosta
Bias and Fairness in Multimodal Machine Learning: A Case Study of Automated Video Interviews
Brandon Booth
Technology as Infrastructure for Dehumanization: Three Hundred Million People with the Same Face
Sharon Oviatt
Investigating Trust in Human-AI Collaboration for Supporting Human-Centered AI-Enabled Decision Making
Abdullah Aman Tutul
Impact of the Size of Modules on Target Acquisition and Pursuit for Future Modular Shape-changing PUIs
Laura Pruszko
Why Do I Have to Take Over Control? Evaluating Safe Handovers with Advance Notice and Explanations in Highly Automated Driving
Frederik Wiehr
List of Accepted Posters (66)
Speech and Language
What’s This? A Voice and Touch Multimodal Approach for Ambiguity Resolution in Voice Assistants
Sebastian Rodriguez
Audiovisual Speech Synthesis using Tacotron2
Ahmed Hussen
Cross Lingual Video and Text Retrieval: A New Benchmark Dataset and Algorithm
Jayaprakash Akula
Multimodal User Satisfaction Recognition for Non-task Oriented Dialogue Systems
Wenqing Wei
Deep Transfer Learning for Recognizing Functional Interactions via Head Movements in Multiparty Conversations
Kazuhiro Otsuka
To Rate or Not To Rate: Investigating Evaluation Methods for Generated Co-Speech Gestures
Pieter Wolfert
Towards Automatic Narrative Coherence Prediction
Hanan Salam
Semi-supervised Visual Feature Integration for Language Models through Sentence Visualization
Lisai Zhang
Engagement Rewarded Actor-Critic with Conservative Q-Learning for Speech-Driven Backchannel Generation in Human-Robot Interaction
Öykü Zeynep Bayramoğlu
Knowing Where and What to Write in Automated Live Video Comments: A Unified Multi-Task Approach
Hao Wu
Toward Developing a Multimodal Multi-party Hindi Humorous Dataset for Humor Recognition in Conversations
Dushyant Singh Chauhan
ConAn: A Usable Tool for Multimodal Conversation Analysis
Anna Penzkofer
Gaze, Touch, Haptics and Multimodality
EyeMU Interactions: Gaze + IMU Gestures on Mobile Devices
Andy Kong
Looking for laughs: Gaze interaction with laughter pragmatics and coordination
Chiara Mazzocconi
Predicting Gaze from Egocentric Social Interaction Videos and IMU Data
Cigdem Beyan
Perception of Ultrasound Haptic Focal Point Motion
Euan Freeman
Enhancing Ultrasound Haptics with Parametric Audio Effects
Euan Freeman
Investigating the Effect of Polarity in Auditory and Vibrotactile Displays Under Cognitive Load
Jamie Ferguson
User Preferences for Calming Affective Haptic Stimuli in Social Settings
Shaun Alexander Macdonald
ThermEarhook: Investigating Spatial Thermal Haptic Feedback Around the Ear
Kening Zhu
Learning Oculomotor Behaviors from Scanpath
Beibin Li
Directed Gaze Triggers Higher Frequency in Gaze Change: An Automatic Analysis of Dyads in Unstructured Conversation
Georgiana Cristina Dobre
Gaze-based Multimodal Meaning Recovery for Noisy/Complex Environments
Ozge Alacam
Group-Level Focus of Visual Attention for Improved Active Speaker Detection
Christopher Birmingham
Multimodal Ethics, Interfaces, Techniques and Applications
An Interpretable Approach to Hateful Meme Detection
Tanvi Deshpande
Attention-based Multimodal Feature Fusion for Dance Motion Generation
Kosmas Kritsis
Predicting Worker Accuracy from Nonverbal Behaviour: Benefits and Potential for Algorithmic Bias
Yuushi Toyoda
Feature Perception in Broadband Sonar Analysis – Using the Repertory Grid to Elicit Interface Designs to Support Human-Autonomy Teaming
Faye McCabe
Earthquake response drill simulator based on a 3-DOF motion base in augmented reality
Sang-Woo Seo
ML-PersRef: A Machine Learning-based Personalized Multimodal Fusion Approach for Referencing Outside Objects From a Moving Vehicle
Amr Gomaa
Multimodal Detection of Drivers' Drowsiness and Distraction
Kapotaksha Das
Cross-modal Assisted Training for Abnormal Event Recognition in Elevators
Xinmeng Chen
Long-Term, In-the-Wild Study of Feedback About Speech Intelligibility for K-12 Students Attending Class via a Telepresence Robot
Matthew Rueben
Interaction techniques for 3D-positioning objects in mobile augmented reality
Miroslav Bachinski
Online Study Reveals the Multimodal Effects of Discrete Auditory Cues in Moving Target Estimation Task
Katsutoshi Masai
On the Transition of Social Interaction from In-Person to Online: Predicting Changes in Social Media Usage of College Students during the COVID-19 Pandemic based on Pre-COVID-19 On-Campus Colocation
Weichen Wang
Design and Development of a Low-cost Device for Weight and Center of Gravity Simulation in Virtual Reality
Hai-Ning Liang
Inclusive Voice Interaction Techniques for Creative Object Positioning
Farkhandah Aziz
Interaction Modalities for Notification Signals in Augmented Reality
May Jorella Lazaro
HEMVIP: Human Evaluation of Multiple Videos in Parallel
Patrik Jonell
TaxoVec: Taxonomy Based Representation for Web User Profiling
Qinpei Zhao
Inflation-Deflation Networks for Recognizing Head-Movement Functions in Face-to-Face Conversations
Kazuhiro Otsuka
Graph Capsule Aggregation for Unaligned Multimodal Sequences
Jianfeng Wu
PARA: Privacy Management and Control in Emerging IoT Ecosystems using Augmented Reality
Carlos Bermejo Fernandez
Multisensor-Pipeline: A Lightweight, Flexible, and Extensible Framework for Building Multimodal-Multisensor Interfaces
Michael Barz
Knock & Tap: Classification and Localization of Knock and Tap Gestures using Deep Sound Transfer Learning
Detecting Face Touching with Dynamic Time Warping on Smartwatches: A Preliminary Study
Yu-Peng Chen
How Do HCI Researchers Describe Their Software Tools? Insights from a Synopsis Survey of Tools for Multimodal Interaction
Radu-Daniel Vatavu
Health, Mental Health and Disability
Mass-deployable Smartphone-based Objective Hearing Screening with Otoacoustic Emissions
Samarjit Chakraborty
Advances in Multimodal Behavioral Analytics for Early Dementia Diagnosis: A Review
Chathurika Jayangani Palliya Guruge
An Automated Mutual Gaze Detection Framework for Social Behavior Assessment in Autism Therapy of Children
Zhang Guo
Approximating the Mental Lexicon from Clinical Interviews as a Support Tool for Depression Detection
Esaú Villatoro Tello
Tomato Dice: A Multimodal Device to Encourage Breaks During Work
Marissa A. Thompson
Speech Guided Disentangled Visual Representation Learning for Lip Reading
Ya Zhao
Knowledge- and Data-Driven Models of Multimodal Trajectories of Public Speaking Anxiety in Real and Virtual Settings
Ehsanul Haque Nirjhar
Multimodal Approach for Assessing Neuromotor Coordination in Schizophrenia Using Convolutional Neural Networks
Yashish Maduwantha
Sensorimotor synchronization in blind musicians: does lack of vision influence non-verbal musical communication?
Erica Volta
Behavior Analytics
States of Confusion: Eye and Head Tracking Reveal Surgeons' Confusion during Arthroscopic Surgery
Benedikt Werner Hosp
Improving the Movement Synchrony Estimation with Action Quality Assessment in Children Play Therapy
Jicheng Li
Prediction of Interlocutor's Subjective Impressions based on Functional Head-Movement Features in Group Meetings
Kazuhiro Otsuka
Head Matters: Explainable Human-centered Trait Prediction from Head Motion Dynamics
Surbhi Madan
Intra- and Inter-Contrastive Learning for Micro-expression Action Unit Detection
Yante Li
DynGeoNet: Fusion network for micro-expression spotting
Thuong-Khanh Tran
Personality Prediction with Cross-Modality Feature Projection
Daisuke Kamisaka
Improved Speech Emotion Recognition using Transfer Learning and Spectrogram Augmentation
Omid Sadjadi
Human-guided Modality Importance for Affective States
Torsten Wörtwein
List of Accepted DC Presentations (9)
Natural Language Stage of Change Modelling for “Motivationally-driven” Weight Loss Support
Meyer, Selina
Using Generative Adversarial Networks to Create Graphical User Interfaces for Video Games
Acornley, Christopher
Semi-Supervised Learning for Multimodal Speech and Emotion Recognition
Li, Yuanchao
Photogrammetry-based VR interactive pedagogical agent for K12 education
Dai, Laduona
What if I interrupt you?
Yang, Liu
Development of an Interactive Human/Agent Loop using Multimodal Recurrent Neural Networks
Woo, Jieyeon
Assisted End-User Robot Programming
Ajaykumar, Gopika
Understanding Personalised Auditory-Visual Associations in Multi-modal Interactions
O'Toole, Patrick
Accessible applications - Study and design of user interfaces to support users with disabilities
Di Gregorio, Marianna