Workshops

2nd workshop on Media Analytics for Societal Trends: Closing the loop with impact and affect in human-media interactions
Multimodal Language Acquisition and Communication
NeuroManagement and Intelligent Computing Method on Multimodal Interaction




2nd workshop on Media Analytics for Societal Trends: Closing the loop with impact and affect in human-media interactions

Abstract

Multimedia is a rich field for studying interactions at different scales: how different modalities of content (e.g., video, audio, and language) interact with one another to enable rich storytelling; how the content of movies and videos conveys messages to viewers and influences their perception; and how viewers and consumers interact with media content: proximally (e.g., movie ratings, box office returns), perceptually (affect, user reaction, perceived sentiment), and over the long term (e.g., the influence of movie portrayals on certain stereotypes).
Media is also said to mirror the society we live in, but the reverse is often equally true. Research on different forms of media processing has been around for a while, but it has usually focused on low- and mid-level tasks, such as indexing and summarization, without attempting to close the loop with any impact metrics on the audience. With the advent of big data processing, however, the field of computational media research is gaining steam, and significant effort is being devoted to studying affective content, societal impact, and trends in media data. This 2nd Workshop on Media Analytics for Societal Trends comes at a critical time, when interest in human-centric media analytics is on the rise, and we hope the workshop will offer a timely dissemination of research updates to benefit academic and industry researchers and professionals working in these fields.

Website

https://sail.usc.edu/~mica/mast

Organizers

  • Naveen Kumar, Disney Research
  • Chi-Chun (Jeremy) Lee, National Tsing Hua University
  • Ming Li, Duke Kunshan University
  • Tanaya Guha, University of Warwick
  • Shri Narayanan, University of Southern California
  • Krishna Somandepalli, University of Southern California




Multimodal Language Acquisition and Communication

Abstract

This workshop explores the multimodal aspects of communication in general, and language acquisition in particular, with emphasis on the important role of haptics in information transmission. The capability of the skin as a channel of communication has been demonstrated in studies of the natural methods of tactual communication used by individuals with severe hearing and/or visual impairments. Despite a plethora of new haptics technologies and products on the market for sensory substitution, an urgent need still exists for the development of general-purpose devices that can provide users with the opportunity to feel and comprehend speech, environmental sounds, or imagery through the skin. Recent research, using a phonemic-based approach to encoding speech through a 24-channel vibratory array worn on the forearm, has demonstrated that users can acquire vocabularies of 500 words following a relatively short period of training (on the order of tens of hours). This breakthrough provides the background and motivation for continued work on haptic displays that can exploit the information-bearing capacity of the tactile sense for persons with all levels of sensory capabilities. A number of remaining questions regarding the development and use of such novel tactile displays will be addressed by the invited keynote speakers. These include: How should speech and other information be encoded on the skin? What is the best way to facilitate learning? Do multimodal cues lead to better and faster learning outcomes? What lessons can we borrow from hearing and vision research? The workshop talks will provide a state-of-the-art overview of these topics assuming minimum knowledge on the part of the audience. The keynote speakers consist of women researchers who have collaborated in the past in different combinations, and who conduct research in more than one sensory modality. Their familiarity with each other’s work will ensure a coherent presentation on diverse topics.

Website

https://hapticwebsitepurdu.wixsite.com/icmi2019-speechws

Organizers

  • Hong Z. Tan, Purdue University
  • Charlotte M. Reed, Research Laboratory of Electronics (RLE), Massachusetts Institute of Technology (MIT)




NeuroManagement and Intelligent Computing Method on Multimodal Interaction

Abstract

This workshop aims to explore NeuroManagement and novel intelligent computing methods for multimodal interaction from a multidisciplinary perspective. In recent years, research on multimodal interaction has made rapid progress, driven by advances in artificial intelligence and big-data mining as well as new findings on human psychology and behavior. Nevertheless, some difficult issues still need further exploration, such as how to comprehensively handle multimodal data from different sensory channels, including verbal and non-verbal information, and how to accurately understand the expressive meaning of multimodal interactions in complex social situations. These issues have become a leading, multidisciplinary research frontier, as well as a key barrier to be overcome in the development of multimodal interaction. Existing methods are mainly based on pattern recognition techniques trained to map humans' subjective self-reports to their external performances, such as facial expressions, postures, voices, and behaviors. These methods lack a grounding in the systematic mechanisms that correlate humans' different external performances, which makes recognition accuracy and computational cost a bottleneck for practical application, especially for human-machine interaction in complex social situations. NeuroManagement, an emerging interdiscipline that studies human psychology, behavior, and management on the basis of neural mechanisms, provides new theory and methodology for exploring the neural activities and inner mechanisms of the dynamic process of multimodal interaction, and may offer important prior knowledge for improving the recognition accuracy and computational efficiency of analyzing humans' external performances.
The workshop will bring together experts from computer engineering, artificial intelligence, neuroscience, management science, linguistics, and psychology to exchange their research and to explore NeuroManagement in multimodal interaction, along with novel intelligent computing methods built on that basis, from a multidisciplinary perspective. We expect this exchange to bring a wealth of new ideas and approaches to multimodal interaction research and to have broad impact on the relevant disciplines.

Website

http://www.fdsm.fudan.edu.cn/aicmi/

Organizers

  • Prof. Weihui Dai, Fudan University, China





ICMI 2019 ACM International Conference on Multimodal Interaction. Copyright © 2018-2019