Multi-sensorial Approaches to Human-Food Interaction (MHFI)
Group Interaction Frontiers in Technology (GIFT)
Modeling Cognitive Processes from Multimodal Data (MCPMD)
Human-Habitat for Health (H3)
Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction (MA3HMI)
Cognitive Architectures for Situated Multimodal Human Robot Language Interaction
Abstract
There is a growing interest in the context of Human-Food Interaction to capitalize on multisensory interactions in order to enhance our food- and drink-related experiences. This, perhaps, should not come as a surprise, given that flavour, for example, is the product of the integration of, at least, gustatory and (retronasal) olfactory cues, and can be influenced by all our senses. Variables such as food/drink colour, shape, texture, and sound can all influence our perception and enjoyment of eating and drinking experiences, something that new technologies can capitalize on in order to “hack” food experiences.
In this 3rd workshop on Multi-Sensorial Approaches to Human-Food Interaction, we are again calling for investigations and applications of systems that create new, or enhance already existing, eating and drinking experiences (‘hacking’ food experiences) in the context of Human-Food Interaction. Moreover, we are interested in works that are based on the principles that govern the systematic connections between the senses. Human-Food Interaction also involves experiencing food interactions digitally in remote locations. Therefore, we are also interested in sensing and actuation interfaces, new communication media, and technologies for persisting and retrieving human-food interactions. Enhancing social interactions to augment the eating experience is another issue we would like to see addressed in this workshop.
Website
https://multisensoryhfi.wordpress.com/
Organizers
- Carlos Velasco
- Anton Nijholt
- Marianna Obrist
- Katsunori Okajima
- Charles Spence
Abstract
The Group Interaction Frontiers in Technology (GIFT) workshop aims to bring together researchers from diverse fields related to group interaction, team dynamics, people analytics, multimodal speech and language processing, social psychology, and organizational behaviour. The workshop will provide a unique opportunity for researchers to share their knowledge and gain insights outside their respective fields, and will hopefully lead to interdisciplinary networking and fruitful collaboration.
Website
https://sites.google.com/view/gift18workshop
Organizers
Proceedings
https://dl.acm.org/citation.cfm?id=3279981
Abstract
Multimodal signals allow us to gain insights into the internal cognitive processes of a person. For example, speech and gesture analysis yields cues about hesitation, knowledgeability, or alertness; eye tracking yields information about a person's focus of attention, task, or cognitive state; and EEG yields information about a person's cognitive load or information appraisal. Capturing cognitive processes is an important research tool for understanding human behavior as well as a crucial part of a user model for an adaptive interactive system such as a robot or a tutoring system. As cognitive processes are often multifaceted, a comprehensive model requires the combination of multiple complementary signals.
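To make the idea of combining complementary signals concrete, here is a minimal late-fusion sketch in Python. It is purely illustrative: the feature names (EEG theta/alpha ratio, pupil diameter, speech pause rate), the normalization bounds, and the reliability weights are assumptions, not results or methods from the workshop.

```python
# Minimal late-fusion sketch: combine per-modality cues into one cognitive
# load estimate. All feature names, bounds, and weights are hypothetical.
import numpy as np

def normalize(x, lo, hi):
    """Map a raw measurement into [0, 1] given assumed expected bounds."""
    return float(np.clip((x - lo) / (hi - lo), 0.0, 1.0))

def fuse_cognitive_load(eeg_theta_alpha_ratio, pupil_diameter_mm, speech_pause_rate):
    """Weighted late fusion of three complementary modality-level cues."""
    cues = np.array([
        normalize(eeg_theta_alpha_ratio, 0.5, 2.5),  # EEG power ratio
        normalize(pupil_diameter_mm, 2.0, 8.0),      # eye tracking
        normalize(speech_pause_rate, 0.0, 1.0),      # speech analysis
    ])
    weights = np.array([0.5, 0.3, 0.2])  # assumed reliability weights
    return float(weights @ cues)         # fused load estimate in [0, 1]

if __name__ == "__main__":
    print(fuse_cognitive_load(1.8, 5.5, 0.4))  # elevated but not maximal load
```

In practice the weights would be learned or replaced by a trained fusion model; the sketch only shows why complementary signals are combined at all.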
Website
https://www.uni-bremen.de/csl/icmi-2018-mcpmd.html
Organizers
- Felix Putze, University of Bremen
- Jutta Hild, Fraunhofer IOSB
- Enkelejda Kasneci, University of Tübingen
- Akane Sano, MIT Media Lab/Cornell University
- Erin Solovey, Drexel University
- Tanja Schultz, University of Bremen
Abstract
In the Internet of Things (IoT) era, digital human interaction with the habitat environment can be perceived as the continuous interconnection and exchange of cognitive, social, and affective signals between an individual or a group and any type of environment built for humans (e.g., home, work, clinic). Through the integration of various interconnected devices (e.g., built-in microphones of home devices; acceleration, GPS, and physiological sensors embedded in smartphones or wearable devices; proximity sensors installed in smart objects), we can collect multimodal data, including speech, spoken content, physiological, psychophysiological, and environmental signals, that enable the sensing of a person’s activity, mood, emotions, preferences, and/or health state, and ultimately provide appropriate feedback. Applications include artificial conversational agents (e.g., Amazon Alexa, Google Home) that enable voice-powered human-computer interaction to provide new information (e.g., nutritional food content, weather forecast) or carry out procedural tasks (e.g., update a daily food intake diary, book a flight); in-the-moment automatic habitat adaptation systems that provide comfort and relaxation; and human health and well-being support systems that are able to track the progress of a disease (e.g., depression tracking through linguistic and acoustic markers), detect high-risk episodes (e.g., suicidal tendencies), and ultimately provide feedback (e.g., guide individuals through a brief intervention) or take appropriate action (e.g., call 911). Special focus will be given to the technical considerations and challenges involved in these tasks, ranging from the nature of the acquired data (e.g., noise, lack of structure, issues of multi-sensory integration) to the high variability present in habitat environments (e.g., different lighting conditions, room acoustic characteristics) and the inherent unpredictability and multi-faceted nature of human behavior. The H3 workshop aims to bring together experts from academia and industry spanning a set of multi-disciplinary fields, including computer science, speech and spoken language understanding, construction science, life sciences, health sciences, and psychology, to discuss their respective views of the problem and identify synergistic and converging solutions.
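As a toy illustration of how sensed multimodal snapshots might be mapped to in-the-moment feedback, the following Python sketch turns a few hypothetical per-minute readings (ambient noise, room temperature, heart rate, sedentary time) into simple rule-based suggestions. The sensor names, thresholds, and actions are assumptions for illustration only, not a description of any system presented at the workshop.

```python
# Minimal sketch of an in-the-moment habitat adaptation rule over a
# hypothetical multimodal snapshot; all fields and thresholds are assumed.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class HabitatSnapshot:
    ambient_noise_db: float   # built-in microphone, A-weighted level
    room_temp_c: float        # thermostat reading
    heart_rate_bpm: float     # wearable sensor
    minutes_sedentary: int    # accelerometer-derived

def suggest_adaptations(s: HabitatSnapshot) -> list[str]:
    """Map a multimodal snapshot to simple comfort/well-being feedback."""
    actions = []
    if s.ambient_noise_db > 70:
        actions.append("suggest a quieter room or noise masking")
    if not (20 <= s.room_temp_c <= 25):
        actions.append("adjust thermostat toward comfort range")
    if s.heart_rate_bpm > 100 and s.minutes_sedentary > 30:
        actions.append("prompt a short breathing or stretch break")
    return actions

if __name__ == "__main__":
    snap = HabitatSnapshot(72.0, 27.5, 105.0, 45)
    print(suggest_adaptations(snap))  # three suggestions triggered
```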
Website
http://h3-icmi2018.cse.tamu.edu/
Organizers
- Theodora Chaspari, Assistant Professor, Computer Science & Engineering, Texas A&M University (chaspari@tamu.edu)
- Angeliki Metallinou, Senior Speech Scientist, Amazon Alexa Machine Learning (ametalli@amazon.com)
- Leah Stein Duker, Assistant Professor of Research, Occupational Science and Therapy, University of Southern California (lstein@chan.usc.edu)
- Amir Behzadan, Associate Professor, Construction Science, Texas A&M University (abehzadan@tamu.edu)
Abstract
One of the aims in building multimodal user interfaces and combining them with technical devices is to make the interaction between user and system as natural as possible, in a situation as natural as possible. The most natural form of interaction can be considered to be how we interact with other humans. Although technology is still far from being human-like and systems can reflect a wide range of technical solutions, they are often represented as artificial agents to facilitate smooth interaction. While the analysis of human-human communication has resulted in many insights, transferring these to human-machine interactions remains challenging, especially if multiple possible interlocutors are present in a certain area. This situation requires that multimodal inputs from the main speaker (e.g., speech, gaze, facial expressions) as well as from possible co-speakers are recorded and interpreted. This interpretation has to occur at both the semantic and affective levels, including aspects such as the personality, mood, or intentions of the user, and anticipating those of the counterpart. Ideally, these processes are performed in real time, in a natural environment, so that the system can respond without delay. The MA3HMI workshop therefore aims at bringing together researchers working on the analysis of multimodal data as a means to develop technical devices that can interact with humans. In particular, artificial agents can be regarded in their broadest sense, including virtual chat agents, empathic speech interfaces, and lifestyle coaches on a smartphone. We focus on the environment and situation in which an interaction is situated, extending the investigations on real-time aspects of human-machine interaction. We address the synergy of situation, context, and interaction history in the development and evaluation of multimodal, real-time systems.
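As a minimal illustration of handling multiple possible interlocutors, the Python sketch below scores each person's voice activity and gaze toward the agent and picks the most likely main speaker. The cue names, weights, and threshold are hypothetical; real systems fuse far richer multimodal evidence in real time.

```python
# Minimal sketch of selecting the current main speaker among several possible
# interlocutors from two cues per person: voice activity and gaze toward the
# agent. Cue names, weights, and threshold are assumptions.
from __future__ import annotations

def main_speaker(observations: dict[str, dict[str, float]]) -> str | None:
    """observations maps a person id to cues in [0, 1]:
    {"speech": voice activity, "gaze_at_agent": gaze toward the agent}."""
    def score(cues: dict[str, float]) -> float:
        # Weight speech activity higher than gaze (assumed weights).
        return 0.7 * cues.get("speech", 0.0) + 0.3 * cues.get("gaze_at_agent", 0.0)

    best = max(observations, key=lambda pid: score(observations[pid]), default=None)
    if best is None or score(observations[best]) < 0.4:  # assumed threshold
        return None  # nobody is clearly addressing the system
    return best

if __name__ == "__main__":
    frame = {"A": {"speech": 0.9, "gaze_at_agent": 0.2},
             "B": {"speech": 0.1, "gaze_at_agent": 0.8}}
    print(main_speaker(frame))  # -> "A"
```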
Website
http://ma3hmi.cogsy.de/
Organizers
- Ronald Böck - Otto von Guericke University Magdeburg, Germany
- Francesca Bonin - IBM Research, Ireland
- Nick Campbell - Trinity College Dublin, Ireland
- Ronald Poppe - Utrecht University, The Netherlands
Abstract
In many application fields of human robot interaction, robots need to adapt to changing contexts and thus be able to learn tasks from non-expert humans through verbal and non-verbal interaction. Inspired by human cognition, we are interested in various aspects of learning, including multimodal representations, mechanisms for the acquisition of concepts (words, objects, actions), memory structures, etc., up to full models of socially guided, situated, multimodal language interaction. These models can then be used to test theories of human situated multimodal interaction, as well as to inform computational models in this area of research.
In the Workshop on Cognitive Architectures for Situated Multimodal Human Robot Language Interaction, we focus on robot action and object learning from multimodal interaction with a human tutor. Inspired by human cognition, the research interests of this workshop tackle different aspects of robot learning, such as (i) the kind of data used to develop socially guided models of language acquisition, (ii) the collection and preprocessing of empirical data to develop cognitively inspired models of language acquisition, (iii) the multimodal complexity of human interaction, (iv) multimodal models of language learning, and (v) adequate machine learning approaches to handle such high-dimensional data.
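As a small illustration of one such acquisition mechanism, the sketch below implements simple cross-situational word-object learning: the learner counts which objects co-occur with which words across situations and associates each word with its most frequent companion. The toy data and the counting heuristic are illustrative assumptions, not a model endorsed by the workshop.

```python
# Minimal sketch of cross-situational word-object learning: associate each
# word with the object it co-occurs with most often. Toy data is invented.
from collections import defaultdict

def learn_word_object_map(situations):
    """situations: iterable of (words, visible_objects) pairs observed together."""
    counts = defaultdict(lambda: defaultdict(int))
    for words, objects in situations:
        for w in words:
            for o in objects:
                counts[w][o] += 1
    # For each word, pick the object it co-occurred with most often.
    return {w: max(obj_counts, key=obj_counts.get) for w, obj_counts in counts.items()}

if __name__ == "__main__":
    data = [
        (["take", "the", "cup"], {"cup", "table"}),
        (["put", "the", "cup", "down"], {"cup", "box"}),
        (["open", "the", "box"], {"box", "table"}),
    ]
    print(learn_word_object_map(data)["cup"])  # -> "cup"
```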
The workshop aims at bringing together linguists, computer scientists, cognitive scientists, and psychologists, with a particular focus on embodied models of situated natural language interaction; the challenges will be discussed from a multidisciplinary perspective.
Website
http://ralli.ofai.at/workshop.html
Organizers
- Stephanie Gross, Austrian Research Institute for Artificial Intelligence, Vienna, Austria
- Brigitte Krenn, Austrian Research Institute for Artificial Intelligence, Vienna, Austria
- Matthias Scheutz, Department of Computer Science at Tufts University, Massachusetts, USA
- Matthias Hirschmanner, Automation and Control Institute at Vienna University of Technology, Vienna, Austria