Keynote Speakers

James W. Pennebaker

Prof. James W. Pennebaker

The University of Texas at Austin, USA

Title : Understanding people by tracking their word use

Abstract: The words people use in their conversations, emails, and diaries can tell us how they think, approach problems, connect with others, and behave. Of particular interest is people's use of function words -- pronouns, articles, and other small and forgettable words. Processed in the brain differently from content words, function words reveal where people are paying attention and how they think about themselves and others. After summarizing dozens of studies on language and psychological state, the talk will explore how text analysis can help us get inside the heads of the people we study.
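The kind of word-count analysis the abstract describes can be illustrated with a minimal sketch. This is not LIWC itself; the tiny word list and the function name here are invented for illustration (LIWC's real dictionaries contain thousands of words organized into many categories):

```python
import re

# A tiny, illustrative function-word list (hypothetical; real LIWC
# dictionaries are far larger and split into many categories).
FUNCTION_WORDS = {
    "i", "you", "we", "he", "she", "it", "they",   # pronouns
    "a", "an", "the",                              # articles
    "and", "but", "or", "in", "on", "of", "to",    # other small words
}

def function_word_rate(text: str) -> float:
    """Fraction of tokens that are function words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in FUNCTION_WORDS)
    return hits / len(tokens)

print(round(function_word_rate("I think we should talk about it."), 2))  # → 0.43
```

Comparing such rates across people or situations is the basic move behind linking function-word use to attention and psychological state.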

Bio: James W. Pennebaker is the Regents Centennial Professor in social and personality psychology at the University of Texas at Austin. Over the last decade, he and his team have developed the computer program LIWC and a wide range of algorithms that link the use of function words to personality, social behaviors, and motivations. This work has been based on both laboratory studies and very large samples from social media, including blogs, Twitter, Facebook, and search terms. He is also known for having developed the expressive writing method, whereby people who are asked to write about emotional upheavals demonstrate improvements in physical and mental health. Author of over 300 research articles and 8 books, Pennebaker is among the most highly cited social scientists in the world. He has been continuously funded by the National Science Foundation, the National Institutes of Health, and the DOD since 1983.

Susumu Tachi

Prof. Susumu Tachi

The University of Tokyo, Japan

Title : Embodied Media: Expanding Human Capacity via Virtual Reality and Telexistence

Abstract: The information we acquire in real life gives us a holistic experience that fully incorporates a variety of sensations and bodily motions such as seeing, hearing, speaking, touching, smelling, tasting, and moving. However, the sensory modalities that can be transmitted in our information space are usually limited to visual and auditory ones. Haptic information is rarely used in the information space in our daily lives except in the case of warnings or alerts such as cellphone vibrations.

Embodied media such as virtual reality and telexistence should provide holistic sensations, i.e., integrated visual, auditory, haptic, palatal, olfactory, and kinesthetic sensations, such that human users feel that they are present in a computer-generated virtual information space, or in a remote space where they have an alternate presence. Haptics plays an important role in embodied media because it provides both proprioception and cutaneous sensations; it lets users feel like they are touching distant people and objects and also lets them “touch” artificial objects as they see them.

In this keynote, I give an overview of embodied media that extend human experiences and introduce our research on embodied media that are both visible and tangible, based on our proposed theory of haptic primary colors. Such embodied media would enable telecommunication, tele-experience, and pseudo-experience, providing sensations such that users would feel as though they were working in a natural environment. They would also enable humans to engage in creative activities, such as design and creation, as though they were in the real environment.

We have succeeded in transmitting fine haptic sensations, such as material texture and temperature, from an avatar robot’s fingers to a human user’s fingers. The avatar robot is a telexistence anthropomorphic robot, called TELESAR V, with a body and limbs with 53 degrees of freedom. This robot can transmit not only visual and auditory sensations of presence to human users but also realistic haptic sensations. Our other inventions include RePro3D, a full-parallax autostereoscopic 3D (three-dimensional) display with haptic feedback using RPT (retroreflective projection technology); TECHTILE Toolkit, a prototyping tool for the design and improvement of haptic media; and HaptoMIRAGE, a 180°-field-of-view autostereoscopic 3D display using ARIA (active-shuttered real image autostereoscopy) that can be used by three users simultaneously.

Bio: Susumu Tachi is currently Professor Emeritus of The University of Tokyo, Japan, and is leading several research projects on telexistence, virtual reality, and haptics, including the JST ACCEL Embodied Media Project at the Tachi Laboratory of the Institute of Gerontology, The University of Tokyo. He is the 46th President of the Society of Instrument and Control Engineers (SICE), a Founding Director of the Robotics Society of Japan (RSJ), and the Founding President of the Virtual Reality Society of Japan (VRSJ).

Dr. Tachi received his B.E. and Ph.D. degrees from The University of Tokyo in 1968 and 1973, respectively. He joined the Faculty of Engineering of The University of Tokyo in 1973, and in 1975, he moved to the Mechanical Engineering Laboratory, Ministry of International Trade and Industry, Japan, where he served as the Director of the Biorobotics Division. From 1979 to 1980, Dr. Tachi was a Japanese Government Award Senior Visiting Scientist at the Massachusetts Institute of Technology, USA. In 1989, he rejoined The University of Tokyo, where he served as Professor at the Department of Information Physics and Computing until March 2009. He also served as Professor and Director of the International Virtual Reality Center at Keio University, Japan, from April 2009 until March 2015.

Since 1988, he has served as Chairman of the IMEKO Technical Committee on Measurement in Robotics, directed the organization of the ISMCR symposia, and received the IMEKO Distinguished Service Award in 1997. He initiated and founded the International Conference on Artificial Reality and Telexistence (ICAT) in 1991 and the International-collegiate Virtual Reality Contest (IVRC) in 1993. He received the 2007 IEEE VR Career Award and has served as General Chair of IEEE Virtual Reality Conferences.

Richard Zemel

Prof. Richard Zemel

The University of Toronto, Canada

Title : Learning to Generate Images and Their Descriptions

Abstract: Recent advances in computer vision, natural language processing, and related areas have led to a renewed interest in artificial intelligence applications spanning multiple domains. In particular, the generation of natural, human-like captions for images has seen an extraordinary increase in interest. I will describe approaches that combine state-of-the-art computer vision techniques and language models to produce descriptions of visual content of surprisingly high quality. Related methods have also led to significant progress in generating images. I will emphasize both the limitations of current approaches and the challenges that lie ahead.

Bio: Richard Zemel is a Professor of Computer Science at the University of Toronto. Prior to that, he was on the faculty at the University of Arizona in Computer Science and Psychology, and a Postdoctoral Fellow at the Salk Institute and at CMU. He received a B.Sc. in History & Science from Harvard and a Ph.D. in Computer Science from the University of Toronto. His awards and honors include a Young Investigator Award from the ONR and six Dean's Excellence Awards. He is a Fellow of the Canadian Institute for Advanced Research and a member of the NIPS Advisory Board. His research interests include topics in machine learning, vision, and neural coding.

Wolfgang Wahlster

Prof. Wolfgang Wahlster

DFKI, Germany

Title : Help me if you can: Towards Multiadaptive Interaction Platforms

Abstract: Autonomous systems such as self-driving cars and collaborative robots must occasionally ask the people around them for help in anomalous situations. A new generation of multiadaptive interaction platforms provides a comprehensive multimodal presentation of the current situation in real time, so that a smooth transfer of control back and forth between human agents and AI systems is guaranteed. We present the anatomy of our multiadaptive human-environment interaction platform, which includes explicit models of the attentional and cognitive state of the human agents as well as a dynamic model of the cyber-physical environment, and supports massive multimodality and multiscale, multiparty interaction. It is based on the principles of symmetric multimodality and bidirectional representations: all input modes are also available as output modes and vice versa, so that the system not only understands and represents the user’s multimodal input but also its own multimodal output. We illustrate our approach with examples from advanced automotive and manufacturing applications.
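The symmetry principle in the abstract (every input mode is also available as an output mode) can be sketched as a simple registry invariant. This is a toy illustration, not the actual platform; the class and mode names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ModalityRegistry:
    """Toy registry enforcing symmetric multimodality: every mode
    registered for input must also be available for output."""
    input_modes: set = field(default_factory=set)
    output_modes: set = field(default_factory=set)

    def register(self, mode: str) -> None:
        # Symmetric registration: each mode is added in both directions,
        # so the system can both understand and generate it.
        self.input_modes.add(mode)
        self.output_modes.add(mode)

    def is_symmetric(self) -> bool:
        return self.input_modes == self.output_modes

reg = ModalityRegistry()
for mode in ("speech", "gesture", "gaze", "haptics"):
    reg.register(mode)
print(reg.is_symmetric())  # → True, by construction
```

Enforcing the invariant at registration time, rather than checking it later, is one way a platform could guarantee the bidirectionality the abstract describes.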

Bio: Wolfgang Wahlster is the Director of the German Research Center for Artificial Intelligence (DFKI) and a Professor of Computer Science at Saarland University. He has published more than 200 technical papers and 15 books on multimodal human-computer interaction, user modeling, and intelligent environments, as well as the internet of things and services. He is an AAAI Fellow, an EurAI Fellow, and a GI Fellow. In 2001, the President of Germany presented the German Future Prize, the highest personal scientific award in Germany, to Professor Wahlster for his work on intelligent user interfaces. He was elected a Foreign Member of the Royal Swedish Academy of Sciences in Stockholm and a Full Member of the German National Academy of Sciences Leopoldina, founded in 1652. He has been awarded the Federal Cross of Merit, First Class, of Germany. He is a member of the Executive Boards of EIT Digital and the International Computer Science Institute (ICSI) at UC Berkeley. In 2013, he received the Donald E. Walker Distinguished Service Award of the International Joint Conferences on Artificial Intelligence for his substantial contributions, as well as his extensive service, to the field of Artificial Intelligence throughout his career. He has served as chairman of the international advisory boards of NII and NICT in Japan. He is the editor of Springer’s LNAI series and serves on the editorial boards of various top international CS journals.

ICMI 2016 ACM International Conference on Multimodal Interaction. Copyright © 2015-2017