- Action Modelling for Interaction and Analysis in Smart Sports and Physical Education
- Bridging social sciences and AI for understanding child behaviour
- International Workshop on Deep Video Understanding
- Face and Gesture Analysis for Health Informatics
- Insights on Group & Team Dynamics
- Modeling Socio-Emotional and Cognitive Processes from Multimodal Data in the Wild
- Multisensory Approaches to Human-Food Interaction
- Oral History and Technology
- Multimodal Affect and Aesthetic Experience
- Multimodal Interaction in Psychopathology
- Multimodal e-Coaches
- Social affective multimodal interaction for health
- Multi-Timescale Sensitive Movement Technologies
Action Modelling for Interaction and Analysis in Smart Sports and Physical Education
Abstract
The workshop intends to bring together research in Machine Learning, Interactive Technology, and Sports & Movement Science to propose multimodal, interactive systems that trainers and players can use during routine training sessions to analyse performance and provide real-time interactive feedback. Performance in sports depends on training programs designed by team staff, with a regimen of physical, technical, tactical, and perceptual-cognitive exercises. Depending on how participants perform, exercises are adapted or the program may be redesigned. State-of-the-art data science methods have led to groundbreaking changes. Data is collected from sources such as position and motion tracking of players in basketball and volleyball, and match statistics in baseball and football. Nor is this limited to sports training: it extends to any exercise-based physical activity, such as dance lessons, aerobics, or yoga. There are multiple examples of both sensor-based and computer-vision-based approaches for automatic recognition of sports actions in numerous sports, e.g. soccer, tennis, hockey, basketball, and rugby. However, sports trainers still rely on manual effort to collect and analyse events of interest related to the intended learning focus. The workshop aims to fill the gap between the state of the art in technological development and the state of the art in sports and physical education. We aim to bring together researchers from these areas and solicit ideas on how recent technological advances can be applied to real-life sports and PE scenarios in a multidisciplinary, user-centric way that enhances the overall user experience of PE. We invite papers on frameworks, system ideas (architectures), user studies, datasets, and models that are, or have the potential to be, readily applied in real-life training scenarios for sports or recreational physical activities.
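As a rough illustration of the sensor-based action recognition the abstract refers to, the sketch below trains a classifier on windowed accelerometer features. The data, class names, and parameters are invented placeholders under assumed conditions, not a system presented at the workshop.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for tri-axial accelerometer recordings: 200 windows of 2 s at 50 Hz.
windows = rng.normal(size=(200, 100, 3))
labels = rng.integers(0, 3, size=200)  # e.g. 0=serve, 1=smash, 2=idle (hypothetical)

def features(w):
    # Simple per-axis statistics commonly used for activity recognition:
    # mean, standard deviation, and mean absolute first difference.
    return np.concatenate([w.mean(axis=0), w.std(axis=0),
                           np.abs(np.diff(w, axis=0)).mean(axis=0)])

X = np.array([features(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

In a real training scenario, the synthetic windows would be replaced by segmented sensor streams and the predicted labels fed back to trainers as interactive feedback.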
Website
http://smart-sports-exercises.nl/maistrope/
Organizers
- Fahim A. Salim (BSS, University of Twente)
- Bert-Jan F. van Beijnum (BSS, University of Twente)
- Dennis Reidsma (HMI, University of Twente)
- Saturnino Luz (Usher Institute, University of Edinburgh)
- Maite Frutos-Pascual (Digital Media Technology, Birmingham City University)
- Fasih Haider (Usher Institute, University of Edinburgh)
Bridging social sciences and AI for understanding child behaviour
Abstract
Child behaviour is a topic of wide scientific interest among many different disciplines, including the social and behavioural sciences and artificial intelligence (AI). Yet knowledge from these disciplines is not integrated to its full potential, owing among other things to the dissemination of knowledge in different outlets (journals, conferences) and to differing research practices. In this workshop, we aim to connect these fields and fill the gaps between scientific and technological capabilities, addressing topics such as: using AI (e.g. audio, visual, and textual signal processing and machine learning) to better understand and model child behavioural and developmental processes; challenges and opportunities in large-scale child behaviour analysis; and implementing explainable ML/AI on sensitive child data. We also welcome contributions on new child-behaviour-related multimodal corpora and preliminary experiments on them.
Website
https://sites.google.com/view/wocbu/home
Organizers
- Heysem Kaya (Utrecht University)
- Roy Hessels (Utrecht University)
- Maryam Najafian (MIT)
- Sandra Hanekamp (Harvard Medical School)
- Saeid Safavi (University of Surrey)
International Workshop on Deep Video Understanding
Abstract
Deep video understanding is a difficult task that requires systems to develop a deep analysis and understanding of the relationships between the different entities in a video, to use known information to reason about other, more hidden information, and to populate a knowledge graph (KG) with all acquired information. To work on this task, a system should take into consideration all available modalities (speech, image/video, and in some cases text). The aim of this workshop is to push the limits of multimodal extraction, fusion, and analysis techniques for analyzing long-duration videos holistically and extracting useful knowledge that can be applied to different types of queries. The target knowledge includes both visual and non-visual elements. As video and multimedia data become ever more widespread and usable across domains, the approaches and techniques targeted by this workshop will only grow in relevance in the coming years. The call for contributions supports long, short, and abstract papers related to multimedia understanding, in addition to an optional track for researchers interested in applying their techniques to a new pilot Creative Commons movie dataset (HLVU) collected by the organizers, who will distribute the dataset, development data, and testing queries, and will evaluate and score the runs submitted by participating researchers. In this optional challenge track, all participants will be invited to submit a paper describing their approach to solving the testing queries.
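To make the knowledge-graph formulation concrete, here is a minimal sketch of storing extracted entity relationships as triples and answering a simple relational query. The entities, relations, and query are hypothetical examples, not the HLVU schema.

```python
from collections import defaultdict

# Triples a video-understanding system might extract from a movie (invented).
triples = [
    ("Anna", "talks_to", "Ben"),
    ("Anna", "located_in", "kitchen"),
    ("Ben", "sibling_of", "Anna"),
]

# Adjacency-list view of the knowledge graph: subject -> [(relation, object)].
graph = defaultdict(list)
for subj, rel, obj in triples:
    graph[subj].append((rel, obj))

def query(subject, relation):
    """Return all objects linked to `subject` by `relation`."""
    return [o for r, o in graph[subject] if r == relation]

print(query("Anna", "talks_to"))  # -> ['Ben']
```

Real systems would populate such a graph from fused speech, vision, and text analyses, and support far richer queries, e.g. multi-hop reasoning over hidden relationships.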
Website
https://sites.google.com/view/dvu2020-workshop/home
Organizers
- Keith Curtis (National Institute of Standards and Technology)
- George Awad (Georgetown University & National Institute of Standards and Technology)
- Shahzad Rajput (Georgetown University & National Institute of Standards and Technology)
- Ian Soboroff (National Institute of Standards and Technology)
Face and Gesture Analysis for Health Informatics
Abstract
There is ever-growing research interest in the computer vision and machine learning community in modeling human facial and gestural behavior for clinical applications. However, the current state of the art in computer vision and machine learning for face and gesture analysis has not yet achieved the goal of reliable use of behavioral indicators in clinical contexts. One main challenge on the way to this goal is the lack of available archives of behavioral observations of individuals who have clinically relevant conditions (e.g., pain, depression, autism spectrum disorder). Well-labeled recordings of clinically relevant conditions are necessary to train classifiers, and interdisciplinary efforts to address this necessity are needed. The workshop aims to discuss the strengths and major challenges of using computer vision and machine learning for automatic face and gesture analysis in clinical research and healthcare applications. We invite scientists working in related areas of computer vision and machine learning for face and gesture analysis, affective computing, human behavior sensing, and cognitive behavior to share their expertise and achievements in the emerging field of computer-vision- and machine-learning-based face and gesture analysis for health informatics.
Website
http://fgahi.isir.upmc.fr/
Organizers
- Zakia Hammal (Carnegie Mellon University)
- Di Huang (Beihang University)
- Liming Chen (École Centrale de Lyon)
- Mohamed Daoudi (IMT Lille Douai)
- Kévin Bailly (Sorbonne University)
Insights on Group & Team Dynamics
Abstract
To capture temporal group and team dynamics, both social and computer scientists are increasingly working with annotated behavioral interaction data. Such data can provide the basis for novel research lines that capture dynamic, often "messy" group phenomena, while at the same time posing intriguing challenges for the automated analysis of multimodal interaction. For example, what can behavioral patterns of social signals in group interactions tell us about complex, often hard-to-grasp emergent group constructs such as conflict, cohesion, cooperation, or team climate? Technological advances in social signal processing allow for novel ways of group analysis to tackle these types of questions. At the same time, a growing number of group researchers with a background in the social sciences are embracing more behavioral approaches to group phenomena. Facilitating dialogue and collaboration among these disciplines has the potential to spark synergies and radically innovate both multimodal interaction research and group research. This workshop is part of a timeline of initiatives starting with a 2016 Lorentz Workshop that aimed to bring together group scholars and researchers from the social and affective computing community.
Website
http://geeksngroupies.ewi.tudelft.nl/icmi2020/
Organizers
- Hayley Hung (Delft University of Technology)
- Nale Lehmann-Willenbrock (University of Hamburg)
- Giovanna Varni (LTCI, Télécom Paris, Institut polytechnique de Paris)
- Fabiola H. Gerpott (WHU - Otto Beisheim School of Management)
- Catharine Oertel (Delft University of Technology)
- Gabriel Murray (University of the Fraser Valley and University of British Columbia)
Modeling Socio-Emotional and Cognitive Processes from Multimodal Data in the Wild
Abstract
Multimodal signal processing in HRI and HCI is increasingly entering a more applied stage. Many systems now aim to provide an engaging interaction experience in contexts of everyday life. At the same time, this means that the variability and complexity of the social contexts involved have dramatically increased. Data-driven system behaviors that may have been adequately understood, labeled, and trained in one context may perform rather poorly when deployed in the wild. But what are the major roadblocks for multimodal signal processing in the wild, and how can they be overcome? A central aim of this workshop is to engage in discussion about novel approaches and lessons learned in modeling multimodal data. Beyond this, we also need to do more to anticipate future challenges in both data modelling and interaction design. Ethical challenges and the treatment of the highly variable availability of data provided or donated by users are just two examples of areas where an exchange on the state of the art in multimodal data processing in the wild appears urgently needed.
Website
https://sites.google.com/view/modeling-multimodal-data/
Organizers
- Dennis Küster (University of Bremen)
- Felix Putze (University of Bremen)
- Patrícia Alves-Oliveira (University of Washington)
- Maike Paetzel (Uppsala University)
- Tanja Schultz (University of Bremen)
Multisensory Approaches to Human-Food Interaction
Abstract
Building on the successful development of the first, second, and third workshops on Multisensory Approaches to Human-Food Interaction (Asia: 18th ICMI, Tokyo, November 2016; Europe: 19th ICMI, Glasgow, November 2017; North America: 20th ICMI, Boulder), as well as on the growing interest in the field, we now present the 4th edition of this workshop. There is growing interest in the context of Human-Food Interaction in capitalizing on multisensory interactions to enhance our food- and drink-related experiences. This, perhaps, should not come as a surprise, given that flavour, for example, is the product of the integration of at least gustatory and (retronasal) olfactory cues, and can be influenced by all of our senses. Variables such as food/drink colour, shape, texture, sound, and so on can all influence our perception and enjoyment of eating and drinking experiences, something that new technologies can capitalize on in order to “hack” food experiences. In this workshop, we call for investigations and applications of systems that create new, or enhance existing, eating and drinking experiences (‘hacking’ food experiences) in the context of Human-Food Interaction. Moreover, we are interested in work based on the principles that govern the systematic connections between the senses. Human-Food Interaction also involves experiencing food interactions digitally in remote locations. Therefore, we are also interested in sensing and actuation interfaces, new communication media, and technologies for persisting and retrieving human-food interactions. Enhancing social interaction to augment the eating experience is another issue we would like to see addressed in this workshop.
Website
http://multisensoryhfi.wordpress.com/
Organizers
- Carlos Velasco (BI Norwegian Business School)
- Anton Nijholt (University of Twente)
- Charles Spence (University of Oxford)
- Takuji Narumi (University of Tokyo)
- Kosuke Motoki (Miyagi University)
- Gijs Huisman (Amsterdam University of Applied Sciences)
- Marianna Obrist (University of Sussex)
Oral History and Technology
Abstract
As increasingly sophisticated new technologies for working with numbers, text, sound, and images come on stream, there is one type of data that begs to be explored with the wide array of available digital humanities tools: interview data (De Jong, 2014; Corti and Fielding, 2016). A stream of concerted activities by the proposers of this workshop, based on their engagement with interview data from different perspectives, led to a series of four workshops held between 2016 and 2018 in Oxford, Utrecht, Arezzo, and München, in which we invited different types of scholars to explore interview data with an array of digital tools. These activities were funded by CLARIN (Common Language Resources and Technology Infrastructure). During the workshop we intend first to present the design and results of this series of multidisciplinary, multimodal workshops. The premise is that the multimodal character (text, sound, and facial expression) and the multidisciplinary potential of interview data (history, oral and written language, audio-visual communication) are rarely fully exploited, as most scholars focus on analysing the textual representation of the interviews. This might change as scholars become acquainted with approaches and conventions from other disciplines. The second part of the workshop will offer scholars who work with interview data and tools the opportunity to present their work.
Website
https://oralhistory.eu/workshops/icmi-2020
Organizers
- Arjan van Hessen (University of Twente)
- Stefania Scagliola (C2DH, University of Luxembourg)
- Louise Corti (UK Data Archive)
- Silvia Calamai (Università di Siena)
- Norah Karrouche (Erasmus Universiteit Rotterdam)
- Christoph Draxler (Ludwig-Maximilians-Universität München)
- Henk van den Heuvel (Radboud Universiteit Nijmegen)
- Jeannine Beeken (University of Essex)
Multimodal Affect and Aesthetic Experience
Abstract
The term “aesthetic experience” refers to the inner state of a person exposed to the form and content of artistic objects. Exploring certain aesthetic values of artistic objects, as well as interpreting the aesthetic experience of people exposed to art, can contribute towards understanding (a) art and (b) people’s affective reactions to artwork. Focusing on different types of artistic content, such as movies, music, urban art, and other artwork, the goal of this workshop is to enhance interdisciplinary collaboration between affective computing and aesthetics researchers.
Website
https://sites.google.com/view/maae2020
Organizers
- Theodoros Kostoulas (Bournemouth University)
- Michal Muszynski (University of Geneva and Carnegie Mellon University)
- Theodora Chaspari (Texas A&M University)
- Panos Amelidis (Bournemouth University)
Multimodal Interaction in Psychopathology
Abstract
Millions of people worldwide are affected by mental disorders spanning depression, bipolar disorder, obsessive-compulsive disorder, schizophrenia, and dementia. Reliable assessment, monitoring, and evaluation are important to identify individuals in need of treatment, to evaluate treatment response, and to achieve remission or moderate the impact of a disorder. Many indicators of the presence or severity of mental disorders are observable. Indicators include psychomotor agitation (inability to sit still, pacing, hand wringing) or retardation (slowed speech and body movements, speech that is decreased in volume or vocal quality), as well as changes in facial expression, gaze, body movement, and cognition. Attempts at diagnosis, screening, and evaluation of treatment response from behavioral indicators have focused primarily on the individual alone and on individual modalities. Yet mental disorders strongly affect social interaction and relationships with family members, in work settings, and on social media; they are multimodal as well as interpersonal. For these reasons, it is critical to use multimodal indicators in a variety of interpersonal contexts. The proposed Multimodal Interaction in Psychopathology workshop aims to bring together computer scientists, psychologists, behavioral scientists, neuroscientists, and clinicians with a focus on multimodal interaction in psychopathology. The workshop will provide an opportunity to present recent advances in the diagnosis and treatment of mental disorders, to share knowledge, and to foster interdisciplinary networking and collaboration.
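As a small illustration of how one observable indicator named above, reduced vocal intensity, might be quantified, the sketch below computes frame-wise loudness of an audio signal. The synthetic signal, the frame sizes, and the use of raw RMS energy are illustrative assumptions, not clinical methodology.

```python
import numpy as np

# Synthetic stand-in for a recorded voice: a quiet 180 Hz tone at 16 kHz.
sr = 16000
t = np.arange(3 * sr) / sr
y = 0.05 * np.sin(2 * np.pi * 180 * t)

# Frame-wise RMS energy, a crude proxy for vocal loudness over time.
frame, hop = 512, 256
frames = np.lib.stride_tricks.sliding_window_view(y, frame)[::hop]
rms = np.sqrt((frames ** 2).mean(axis=1))
print(f"mean RMS energy: {rms.mean():.4f}")
# In a multimodal system, such frame-wise features would be fused with facial
# and body-movement descriptors before any clinical interpretation.
```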
Website
https://sites.google.com/view/mi-psychopathology
Organizers
- Itir Onal Ertugrul (Carnegie Mellon University)
- Jeffrey Cohn (University of Pittsburgh)
- Hamdi Dibeklioglu (Bilkent University)
Multimodal e-Coaches
Abstract
e-Coaches are promising intelligent systems that aim to support human everyday life, dispatching advice through different interfaces, such as apps, conversational interfaces, and augmented reality interfaces. The conjunction of e-Coaches and the IoT enables new interaction perspectives for users. Indeed, coaching users across a plethora of interfaces and interaction modalities opens new interaction scenarios and new challenges for the HCI community. Typical questions that arise include the following. How to maintain the e-coach's identity and coherence across different interfaces? How to sustain trust and empathy across different interfaces? How to intelligently dispatch advice across interfaces that are distributed in space? Which modalities should be used in different life settings? Can multimodal interaction increase the accessibility of the e-coach? How to make e-Coaches as unobtrusive as possible? How to maintain adherence to e-coaching programmes? The first workshop on multimodal e-Coaches aims to be a venue for discussing these questions and other HCI challenges that arise with e-Coaches distributed across multiple interfaces and devices.
Website
https://multimodal-ecoches.nestore-coach.eu/
Organizers
- Leonardo Angelini (University of Applied Sciences Western Switzerland)
- Mira El Kamali (University of Applied Sciences Western Switzerland)
- Elena Mugellini (University of Applied Sciences Western Switzerland)
- Omar Abou Khaled (University of Applied Sciences Western Switzerland)
- Yordan Dimitrov (Balkan Institute for Labour and Social Policy)
- Vera Veleva (Balkan Institute for Labour and Social Policy)
- Zlatka Gospodinova (Balkan Institute for Labour and Social Policy)
- Nadejda Miteva (Balkan Institute for Labour and Social Policy)
- Richard Wheeler (Balkan Institute for Labour and Social Policy)
- Panagiotis Bamidis (Aristotle University of Thessaloniki)
- Evdokimos Konstantinidis (Aristotle University of Thessaloniki)
- Despoina Petsani (Aristotle University of Thessaloniki)
- Andoni Beristain Iraola (Vicomtech Foundation)
- Gérard Chollet (Intelligent Voice)
- Inés Torres (University of the Basque Country)
- Zoraida Callejas (University of Granada)
- David Griol (University of Granada)
- Kawtar Benghazi (University of Granada)
- Manuel Noguera (University of Granada)
- Anna Esposito (University of Campania “Luigi Vanvitelli”)
- Dimitrios I. Fotiadis (University of Ioannina)
Social affective multimodal interaction for health
Abstract
This workshop invites work describing how interactive, multimodal technology such as virtual agents can be used in social skills training, for measuring and training social-affective interactions. Sensing technology now enables the analysis of users' behaviors and physiological signals (heart rate, EEG, etc.). Various signal processing and machine learning methods can be used for such prediction tasks. Beyond sensing, it is also important to analyze human behaviors and to model and implement training methods (e.g. virtual agents, social robots, relevant scenarios, and appropriate, personalized feedback on social skills performance). Such social signal processing and tools can be applied to measure and reduce social stress in everyday situations, including public speaking at schools and in workplaces. Target populations include people with depression, social anxiety disorder (SAD), schizophrenia, and autism spectrum disorder (ASD), but also a much larger group affected by other social-pathological phenomena.
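As a rough sketch of the kind of prediction task mentioned above, the example below fits a classifier mapping physiological features to a binary stress label. The features, the synthetic labels, and the choice of logistic regression are illustrative assumptions only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical per-session features: mean heart rate (bpm) and HRV (RMSSD, ms).
hr = rng.normal(75, 10, size=300)
hrv = rng.normal(40, 12, size=300)
X = np.column_stack([hr, hrv])
# Toy labels: higher heart rate and lower HRV loosely indicate stress here.
y = ((hr - 75) / 10 - (hrv - 40) / 12 + rng.normal(0, 1, 300) > 0).astype(int)

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[90.0, 25.0]])[0, 1])  # P(stress) for one session
```

A deployed training system would compute such probabilities continuously and use them, for instance, to adapt the difficulty of a public-speaking scenario or the feedback given by a virtual agent.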
Website
https://sites.google.com/view/wsamih/
Organizers
- Hiroki Tanaka (Nara Institute of Science and Technology, Japan)
- Satoshi Nakamura (Nara Institute of Science and Technology, Japan)
- Jean-Claude Martin (CNRS-LIMSI, France)
- Catherine Pelachaud (CNRS-ISIR, Sorbonne University, France)
Multi-Timescale Sensitive Movement Technologies
Abstract
Reckoning with the mutually interactive time scales that characterize human behavior is now a major multidisciplinary challenge for the qualitative analysis of human movement, entrainment, and prediction. Innovative, scientifically grounded, time-adaptive technologies operating at multiple time scales in a multi-layered approach are a promising direction for multimodal interfaces. This workshop aims to stimulate submissions on novel computational models and systems for the automated detection, measurement, and prediction of movement qualities from behavioural signals, based on multi-layer parallel processes at non-linearly stratified temporal dimensions. Furthermore, the workshop invites contributions towards novel technologies for human movement analysis that go beyond the well-known motion capture paradigm. Future motion capture and movement analysis systems will be endowed with completely new functionality, yielding a new generation of time-aware multisensory motion perception and prediction systems. Contributions on computational models, multimodal systems, and experiments on the above-mentioned core topics, as well as application scenarios, including, e.g., healing, therapy and rehabilitation, entertainment, the performing arts (music, dance), and the active experience of multimedia cultural content, are welcome. The workshop is partially supported by the EU H2020 FET Proactive project EnTimeMent (GA 824160, https://entimement.dibris.unige.it/). To this end, the workshop will build on the results of the first year of EnTimeMent, while remaining open to new perspectives and experiences.
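As a toy illustration of analysing the same movement signal at several time scales, the sketch below summarizes a movement "energy" feature over short, medium, and long windows. The trajectory, sampling rate, feature, and window lengths are invented for illustration and are not taken from EnTimeMent.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 100                                              # sampling rate (Hz), assumed
pos = np.cumsum(rng.normal(size=(1000, 3)), axis=0)   # stand-in joint trajectory
vel = np.diff(pos, axis=0) * fs                       # finite-difference velocity
energy = (vel ** 2).sum(axis=1)                       # instantaneous movement "energy"

def summarize(signal, scale_s):
    """Mean energy per non-overlapping window of `scale_s` seconds."""
    n = int(scale_s * fs)
    trimmed = signal[: len(signal) // n * n]
    return trimmed.reshape(-1, n).mean(axis=1)

for scale in (0.1, 0.5, 2.0):                         # short, mid, long time scales
    print(scale, summarize(energy, scale)[:3])
```

Multi-timescale systems in the spirit of the workshop would run such parallel summaries continuously, letting different layers detect qualities that only emerge at particular temporal granularities.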
Website
http://www.casapaganini.org/workshop2020/index.html
Organizers
- Antonio Camurri (Casa Paganini - InfoMus, DIBRIS, University of Genoa, Italy)
- Eleonora Ceccaldi (Casa Paganini - InfoMus, DIBRIS, University of Genoa, Italy)
- Gualtiero Volpe (Casa Paganini - InfoMus, DIBRIS, University of Genoa, Italy)
- Benoit Bardy (Euromov, University of Montpellier)
- Nadia Bianchi-Berthouze (UCL Interaction Centre, University College London, United Kingdom)
- Luciano Fadiga (CTNSC, Fondazione Istituto Italiano di Tecnologia)
- Mårten Björkman (KTH Royal Institute of Technology, Sweden)