1- The Second International Workshop on Automated Assessment of Pain (AAP)
2- 2nd Workshop on Social Affective Multimodal Interaction for Health (SAMIH)
3- Insights on Group and Team Dynamics
4- CATS2021: International Workshop on Corpora and Tools for Social Skills Annotation
5- Workshop on Modelling Socio-Emotional and Cognitive Processes from Multimodal Data in the Wild
6- 2nd ICMI Workshop on Bridging Social Sciences and AI for Understanding Child Behaviour
7- The 6th International Workshop on Affective Social Multimedia Computing (ASMMC 2021)
8- Workshop on Multimodal Affect and Aesthetic Experience
9- Empowering Interactive Robots by Learning Through Multimodal Feedback Channels
10- GENEA Workshop 2021: Generation and Evaluation of Non-verbal Behaviour for Embodied Agents
11- Socially-Informed AI for Healthcare – Understanding and Generating Multimodal Nonverbal Cues
The Second International Workshop on Automated Assessment of Pain (AAP)
Abstract
Pain is typically measured by patient self-report, but self-reported pain is difficult to interpret and may be impaired or, in some circumstances, impossible to obtain, for instance in patients with restricted verbal abilities, such as neonates and young children, and in patients with certain neurological or psychiatric impairments (e.g., dementia). Additionally, subjectively experienced pain may be partly or even completely unrelated to somatic pathology such as tissue damage and other disorders. The standard self-assessment of pain therefore does not always allow for an objective and reliable assessment of the quality and intensity of pain. Given individual differences among patients, their families, and healthcare providers, pain is often poorly assessed, underestimated, and inadequately treated. Improving pain management requires objective, valid, and efficient assessment of the onset, intensity, and pattern of occurrence of pain. To address these needs, several efforts are under way in the machine learning and computer vision communities toward automatic and objective assessment of pain from video as a powerful alternative to self-report.
The workshop aims to bring together interdisciplinary researchers working in the field of automatic multimodal assessment of pain (using video, audio, and physiological signals). A key focus of the workshop is the translation of laboratory work into clinical practice.
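As a concrete illustration of frame-level pain measurement from video, one widely used facial metric is the Prkachin and Solomon Pain Intensity (PSPI) score, computed from facial action unit (AU) intensities. The Python sketch below assumes AU intensities have already been extracted by some facial-analysis pipeline; the frame_aus values are illustrative, not real data.

    def pspi(au: dict[str, float]) -> float:
        """Prkachin & Solomon Pain Intensity from facial action unit (AU) intensities.

        PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43, where AU4 is the
        brow lowerer, AU6/AU7 orbit tightening, AU9/AU10 levator contraction,
        and AU43 eye closure.
        """
        return (au["AU4"]
                + max(au["AU6"], au["AU7"])
                + max(au["AU9"], au["AU10"])
                + au["AU43"])

    # Hypothetical AU intensities (0-5 scale) for a single video frame.
    frame_aus = {"AU4": 2.0, "AU6": 1.0, "AU7": 3.0, "AU9": 0.0, "AU10": 1.5, "AU43": 1.0}
    print(pspi(frame_aus))  # 7.5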
Website
http://www.aap-2021.net/
Organizers
- Zakia Hammal, The Robotics Institute, Carnegie Mellon University, USA
- Steffen Walter, University Hospital Ulm, Germany
- Nadia Berthouze, University College London, UK
2nd Workshop on Social Affective Multimodal Interaction for Health (SAMIH)
Abstract
This workshop invites work describing how interactive, multimodal technology such as virtual agents can be used in social skills training for measuring and training social-affective interactions. Sensing technology now enables the analysis of users' behaviors and physiological signals (heart rate, EEG, etc.), and various signal processing and machine learning methods can be applied to such prediction tasks. Beyond sensing, it is also important to analyze human behaviors and to model and implement training methods (e.g., via virtual agents, social robots, relevant scenarios, and the design of appropriate, personalized feedback on social skills performance). Such social signal processing techniques and tools can be applied to measure and reduce social stress in everyday situations, including public speaking at school and in the workplace. Target populations include people with depression, social anxiety disorder (SAD), schizophrenia, and autism spectrum disorder (ASD), as well as a much larger group affected by other social pathological phenomena.
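As a minimal sketch of the kind of physiological prediction task mentioned above, the hypothetical Python example below derives simple heart-rate-variability features from inter-beat intervals and trains an off-the-shelf classifier to predict a binary stress label. The synthetic data, feature set, and labels are illustrative assumptions, not a prescribed pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def hrv_features(ibi_ms: np.ndarray) -> np.ndarray:
        """Simple time-domain HRV features from inter-beat intervals (ms)."""
        diffs = np.diff(ibi_ms)
        return np.array([
            ibi_ms.mean(),                 # mean inter-beat interval
            ibi_ms.std(ddof=1),            # SDNN
            np.sqrt(np.mean(diffs ** 2)),  # RMSSD
        ])

    # Synthetic sessions: "stressed" speakers (label 1) get shorter, less
    # variable inter-beat intervals than relaxed ones (label 0).
    rng = np.random.default_rng(0)
    labels = [0, 1] * 20
    sessions = [rng.normal(800 - 100 * y, 50 - 20 * y, size=120) for y in labels]
    X = np.stack([hrv_features(s) for s in sessions])
    clf = RandomForestClassifier(random_state=0).fit(X, labels)
    print(clf.predict(X[:2]))  # predictions for the first two sessions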
Website
https://sites.google.com/view/samih2021/home
Organizers
- Hiroki Tanaka (Nara Institute of Science and Technology, Japan)
- Satoshi Nakamura (Nara Institute of Science and Technology, Japan)
- Jean-Claude Martin (CNRS-LIMSI, France)
- Catherine Pelachaud (CNRS-ISIR, Sorbonne University, France)
Insights on Group and Team Dynamics
Abstract
To capture temporal group and team dynamics, both social and computer scientists are increasingly working with
annotated behavioral interaction data. Such data can provide the basis for developing novel research lines that
capture dynamic, often "messy" group phenomena and at the same time provide intriguing challenges for the automated
analysis of multimodal interaction. For example, what can the behavioral patterns of social signals in group
interactions tell us about complex, often difficult to grasp emergent group constructs such as conflict, cohesion,
cooperation, or team climate? Technological advances in social signal processing allow for novel ways of group
analysis to tackle these types of questions. At the same time, a growing number of group researchers with a
background in the social sciences are embracing more behavioral approaches to group phenomena. Facilitating dialogue and collaboration among these disciplines has the potential to spark synergies and radically innovate both multimodal interaction research and group research. This workshop is part of a series of initiatives dating back to a 2016 Lorentz Workshop that aimed to bring together group scholars and researchers from the social and affective computing communities.
Website
http://geeksngroupies.ewi.tudelft.nl/icmi2021/
Organizers
- Hayley Hung (Delft University of Technology)
- Joann Keyton (North Carolina State University)
- Joe Allen (University of Utah)
- Giovanna Varni (LTCI, Télécom Paris, Institut polytechnique de Paris)
- Catharine Oertel (Delft University of Technology)
- Gabriel Murray (University of the Fraser Valley and University of British Columbia)
CATS2021: International Workshop on Corpora and Tools for Social Skills Annotation
Abstract
This workshop aims to stimulate multi-disciplinary discussion of the challenges related to corpus creation and annotation for the analysis of social skills behavior. Contributions from computational, psychological, and psychometric perspectives, as well as applications such as platforms for sharing corpora and annotations, are welcome.
The main challenges in corpus creation include choosing the best setup and sensors and finding a trade-off between eliciting natural interactions, limiting invasiveness, and collecting precise information. The second challenge concerns the annotation process: the choice of annotators (experts vs. non-experts), the type of annotation (automatic vs. manual, continuous vs. discrete), and the temporal segmentation (windowed vs. holistic) are crucial for correctly measuring the phenomenon of interest and obtaining significant results.
The topics of CATS2021 are relevant to researchers and stakeholders across disciplines such as computer science, social signal processing, psychology, and statistics. Leveraging the opportunities offered by such a multidisciplinary environment, participants can enrich their perspectives, strengthen their practices and methodologies, and together draw up a research roadmap tackling the discussed challenges, which might be taken up in future collaborations.
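For example, when discrete manual annotations are used, a standard sanity check is inter-annotator agreement. The short Python sketch below computes Cohen's kappa for two hypothetical annotators who labelled the same windowed segments; the labels and windowing scheme are illustrative assumptions.

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical discrete labels assigned by two annotators to the same
    # ten fixed-length windows of an interaction.
    annotator_a = ["smile", "neutral", "smile", "smile", "neutral",
                   "neutral", "smile", "neutral", "smile", "smile"]
    annotator_b = ["smile", "neutral", "neutral", "smile", "neutral",
                   "neutral", "smile", "smile", "smile", "smile"]

    # Cohen's kappa corrects raw agreement for chance agreement; values
    # near 1 suggest reliable annotations, values near 0 chance-level ones.
    print(cohen_kappa_score(annotator_a, annotator_b))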
Website
https://sites.google.com/view/cats2021workshop/home
Organizers
- Beatrice Biancardi, LTCI, Télécom Paris, France (beatrice.biancardi@telecom-paris.fr)
- Eleonora Ceccaldi, University of Genoa, Italy
- Chloé Clavel, LTCI, Télécom Paris, France
- Mathieu Chollet, IMT Atlantique, France
- Tanvi Dinkar, LTCI, Télécom Paris, France
Workshop on Modelling Socio-Emotional and Cognitive Processes from Multimodal Data in the Wild
Abstract
Multimodal signal processing in HRI and HCI is entering an increasingly applied stage, reaching the point of systems that provide engaging interaction experiences in everyday life contexts. Such systems may be adequately understood and trained in one context yet perform rather poorly when deployed in the wild. Multimodal signal processing is essential for the design of more intelligent, adaptive, and even empathic applications in the wild. However, important issues remain largely unresolved, ranging from low-level processing and integration of noisy data streams, through theoretical pitfalls, to increasingly pressing ethical questions about what artificial systems and machine learning can and should do. In this workshop, we will provide a forum for discussing the state of the art in modeling user states from multimodal signals in the wild. The aim is to focus on human-robot adaptive systems with live feedback from body dynamics and physiological sensing. We welcome work that combines measures of socio-emotional engagement, mental effort, stress, and dynamics of bodily signals with measures of cognitive load to develop more robust and predictive models.
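As one hedged illustration of combining such measures, the Python sketch below performs simple late fusion, averaging the predicted probabilities of two classifiers trained separately on behavioural and physiological features. The synthetic features, labels, and the averaging rule are illustrative assumptions, not an endorsed architecture.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 200
    # Synthetic per-window data: a binary "high cognitive load" label, with
    # body-dynamics features (3-dim) and physiological features (2-dim).
    y = rng.integers(0, 2, n)
    X_body = rng.normal(y[:, None], 1.0, (n, 3))
    X_phys = rng.normal(y[:, None], 1.5, (n, 2))

    body_clf = LogisticRegression().fit(X_body, y)
    phys_clf = LogisticRegression().fit(X_phys, y)

    # Late fusion: average the per-modality probability estimates.
    p_fused = (body_clf.predict_proba(X_body)[:, 1]
               + phys_clf.predict_proba(X_phys)[:, 1]) / 2
    print((p_fused > 0.5).astype(int)[:10])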
Website
https://initrobots.ca/icmiws/
Organizers
- Dennis Küster (University of Bremen)
- Felix Putze (University of Bremen)
- David St-Onge (École de Technologie Supérieure)
- Pascal E. Fortin (McGill University)
- Nerea Urrestilla (École de Technologie Supérieure)
- Tanja Schultz (University of Bremen)
2nd ICMI Workshop on Bridging Social Sciences and AI for Understanding Child Behaviour
Abstract
Child behaviour is a topic of wide scientific interest across many disciplines, including the social and behavioural sciences and artificial intelligence (AI). Yet knowledge from these disciplines is not integrated to its full potential, owing to, among other factors, the dissemination of knowledge across different outlets (journals, conferences) and differing research practices. In this workshop, we aim to connect these fields and fill the gaps between science and technology capabilities, addressing topics such as: using AI (e.g., audio, visual, neuroimaging, and textual signal processing and machine learning) to better understand and model child behavioural and developmental processes; challenges and opportunities in large-scale child behaviour analysis; and implementing explainable ML/AI on sensitive child data. We also welcome contributions on new child-behaviour-related multimodal corpora and preliminary experiments on them. This year, the keynote talks will be given by Alessandro Vinciarelli and Sibel Halfon.
Website
https://sites.google.com/view/wocbu/home
Organizers
- Heysem Kaya, Utrecht University, the Netherlands
- Roy Hessels, Utrecht University, the Netherlands
- Maryam Najafian, MIT, United States
- Sandra Hanekamp, University of Texas at Austin, United States
- Saeid Safavi, University of Surrey, United Kingdom
The 6th International Workshop on Affective Social Multimedia Computing (ASMMC 2021)
Abstract
Affective social multimedia computing is an emerging research topic for both the affective computing and multimedia research communities. Social multimedia is fundamentally changing how we communicate, interact, and collaborate with other people in our daily lives. Compared with well-organized broadcast news and professionally produced videos such as commercials, TV shows, and movies, social multimedia poses great challenges to these research communities. Social multimedia carries a wealth of affective information, and its effective extraction can greatly help social multimedia computing (e.g., processing, indexing, retrieval, and understanding). Although much progress has been made in traditional multimedia research on content analysis, indexing, and retrieval based on subjective concepts such as emotion, aesthetics, and preference, affective social multimedia computing is a new research area that aims to process the affective information carried by social multimedia. For massive and heterogeneous social media data, this research requires a multidisciplinary understanding of content and perceptual cues from social multimedia. From the multimedia perspective, it relies on theoretical and technological findings in affective computing, machine learning, pattern recognition, signal/multimedia processing, computer vision, speech processing, and behavioral and social psychology. Affective analysis of social multimedia and interaction is attracting growing attention from industry and businesses that provide social networking sites and content-sharing services, distribute and host media, and build social interaction with artificial agents. This workshop focuses on the analysis of affective signals in interaction (multimodal analyses enabling artificial agents in human-machine interaction and social interaction with artificial agents) and in social multimedia (e.g., Twitter, WeChat, Weibo, YouTube, Facebook).
Website
http://asmmc.ubtrobot.com
Organizers
- Dong-Yan HUANG (Shenzhen R&D Centre, UBTech Robotics Corp, P.R. China)
- Björn SCHULLER (EIHW, University of Augsburg, Germany)
- Jianhua TAO (NLPR, Institute of Automation, Chinese Academy of Sciences)
- Lei XIE (SAIIP, School of Computer Science, Northwestern Polytechnical University, Xi’an, China)
- Jie YANG (DIIS, National Science Foundation (NSF), USA)
Workshop on Multimodal Affect and Aesthetic Experience
Abstract
The term “aesthetic experience” refers to the inner state of a person exposed to the form and content of artistic objects. Exploring the aesthetic values of artistic objects, indoor and outdoor spaces, urban areas, and modern interactive technology is essential for improving social behaviour, quality of life, and human health in the long term.
Moreover, quantifying and interpreting the aesthetic experience of people in different contexts can contribute
towards (a) creating art and (b) better understanding people’s affective reactions to aesthetic stimuli. Focusing on
different types of artistic content, such as movies, music, urban art, ancient artwork, and modern interactive
technology, the goal of this workshop is to enhance interdisciplinary collaboration among researchers from the following domains: affective computing, aesthetics, human-robot/computer interaction, and digital archaeology and art.
Website
https://sites.google.com/view/maae-2021/home
Organizers
- Michal Muszynski (University of Geneva)
- Leimin Tian (Monash University)
- Edgar Roman-Rangel (Instituto Tecnológico Autónomo de México)
- Theodoros Kostoulas (University of the Aegean)
- Theodora Chaspari (Texas A&M University)
- Panos Amelidis (Bournemouth University)
Empowering Interactive Robots by Learning Through Multimodal Feedback Channels
Abstract
Robots have the potential to assist humans both in the workplace and in everyday life. In contrast to classical robotic applications, assistive robots will face a variety of different tasks, making it essential for them to learn through direct interaction with users. While recent advances in machine learning have facilitated learning from non-expert users, it is still an open question how to optimally incorporate the inherent multimodality of human feedback into these algorithms. Additionally, the full potential of combining explicit and implicit human feedback channels has yet to be explored. In this workshop, we will focus on how to best incorporate multimodal human feedback into existing learning approaches and on which multimodal interactions are preferred and/or required for future robotic assistance. We provide a forum for interdisciplinary exchange between researchers from disciplines such as HRI, HCI, affective computing, natural language understanding, and machine learning, to stimulate a discussion on how multimodality can become key to the emerging field of interactive machine learning and lead to more intuitive and successful interactions with robots in the future.
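As a small sketch of what combining explicit and implicit feedback channels might look like, the hypothetical Python code below blends a button-press signal with an estimated affect score into one scalar reward for an interactive learner. The weighting scheme and both signals are illustrative assumptions, not an established algorithm.

    from dataclasses import dataclass

    @dataclass
    class MultimodalFeedback:
        explicit: float  # e.g., button press: -1 (bad), 0 (none), +1 (good)
        implicit: float  # e.g., estimated user affect in [-1, 1] from facial cues

    def shaped_reward(fb: MultimodalFeedback, w_implicit: float = 0.3) -> float:
        """Blend explicit and implicit feedback into a single reward.

        Explicit feedback dominates when present; otherwise the noisier
        implicit channel provides a weaker learning signal.
        """
        if fb.explicit != 0.0:
            return fb.explicit + w_implicit * fb.implicit
        return w_implicit * fb.implicit

    print(shaped_reward(MultimodalFeedback(explicit=1.0, implicit=-0.2)))  # 0.94
    print(shaped_reward(MultimodalFeedback(explicit=0.0, implicit=0.5)))   # 0.15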
Website
https://sites.google.com/view/interactivemultimodallearning
Organizers
- Cigdem Turan (TU Darmstadt)
- Dorothea Koert (TU Darmstadt)
- Karl David Neergaard (University of Macau)
- Rudolf Lioutikov (University of Texas)
GENEA Workshop 2021: Generation and Evaluation of Non-verbal Behaviour for Embodied Agents
Abstract
Embodied social AI in the form of conversational virtual humans and social robots is becoming a key aspect of human-machine interaction. For several decades, researchers have proposed methods and models to generate non-verbal behaviour for conversational agents in the form of facial expressions, gestures, and gaze. The topic has attracted the attention of communities such as HCI, robotics, and graphics, as well as social and behavioural scientists. Yet embodied agents are still far from having non-verbal behaviours synthesised on the fly, autonomously, in interactive settings. To advance the field of non-verbal behaviour generation, clear methods are needed for evaluating and benchmarking outcomes and for learning from the experiences of different communities. This workshop aims to bring together researchers who use different methods for non-verbal behaviour generation and evaluation, regardless of application area, and hopes to stimulate discussion on how to improve both the generation methods and the evaluation of the results.
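As one hedged example of an objective benchmark of the kind such evaluation methods might include, the Python sketch below computes the mean joint position error between generated and reference motion. The array layout is an illustrative assumption, and objective metrics like this are only one part of evaluating non-verbal behaviour.

    import numpy as np

    def mean_joint_error(generated: np.ndarray, reference: np.ndarray) -> float:
        """Mean Euclidean distance between corresponding joints.

        Both arrays have shape (frames, joints, 3) in a shared coordinate frame.
        """
        return float(np.linalg.norm(generated - reference, axis=-1).mean())

    # Synthetic stand-ins for a reference motion clip and a generated one.
    rng = np.random.default_rng(2)
    reference = rng.normal(size=(100, 15, 3))
    generated = reference + rng.normal(0.05, 0.02, size=reference.shape)
    print(mean_joint_error(generated, reference))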
Website
https://genea-workshop.github.io/2021/
Organizers
- Taras Kucherenko (KTH Royal Institute of Technology, Sweden)
- Zerrin Yumak (Utrecht University, The Netherlands)
- Gustav Eje Henter (KTH Royal Institute of Technology, Sweden)
- Pieter Wolfert (Ghent University, Belgium)
- Youngwoo Yoon (ETRI, South Korea)
- Patrik Jonell (KTH Royal Institute of Technology, Sweden)
Socially-Informed AI for Healthcare – Understanding and Generating Multimodal Nonverbal Cues
Abstract
Advances in face and gesture analysis, computational paralinguistics, multimodal interaction, and human-computer interaction have all played a major role in shaping research into assistive technologies over the last decade, resulting in a breadth of practical applications ranging from diagnosis and treatment tools to social companion technologies. From an analytical perspective, nonverbal cues provide insights into the assessment of wellbeing (e.g., detecting depression or pain) and the detection of mental health, developmental, and neurological conditions such as autism, dementia, depression, and schizophrenia. From a synthesis and generation perspective, it is necessary that assistive technologies, whether disembodied or embodied, be capable of generating engaging, interactive behaviours and interventions that are personalised and adapted to users' needs, profiles, and preferences. While nonverbal cues play an essential role, many key issues remain that affect both the development and the deployment of multimodal technologies in real-world settings. The key aim of this multidisciplinary workshop is to foster cross-pollination by bringing together computer scientists and social psychologists to discuss innovative ideas, challenges, and opportunities for understanding and generating multimodal nonverbal cues within the scope of healthcare applications. In particular, we intend to shed light on three broad questions: 1) how to collect and analyse multimodal nonverbal cues, moving from one-off settings to long-term settings and from one-size-fits-all models to personalised models; 2) how to design technological devices and generate appropriate system behaviours, based on the inferred user's states, needs, and profiles, that are clear to and understandable by their users; and 3) how to use such technologies in assistive and healthcare applications, entailing appropriate qualitative and quantitative evaluation methods.
Website
https://social-ai-for-healthcare.github.io
Organizers
- Oya Celiktutan (Department of Engineering, King’s College London, UK)
- Alexandra Georgescu (Department of Psychology, King’s College London, UK)
- Nicholas Cummins (Department of Biostatistics and Health Informatics, King’s College London, UK)