Keynote Speakers

A Brief History of Intelligence

Hsiao-Wuen Hon

Corporate Vice President, Microsoft Corporation
Chairman, Microsoft Asia-Pacific R&D Group
Managing Director, Microsoft Research Asia

Abstract: Intelligence is the deciding factor in how human beings became the most dominant life form on earth. Throughout history, humans have developed tools and technologies that help civilizations evolve and grow. Computers, and by extension artificial intelligence (AI), have played important roles in that continuum of technologies. Recently, artificial intelligence has garnered much interest and discussion. Because AI systems are tools that can enhance human capability, a sound understanding of what the technology can and cannot do is necessary to ensure their appropriate use. While developing artificial intelligence, we have also found that the definition and understanding of our own human intelligence continue to evolve, and the debate over the race between human and artificial intelligence keeps growing. In this talk, I will describe the history of both artificial intelligence and human intelligence (HI). Drawing on the insights of this historical perspective, I will illustrate how AI and HI will co-evolve with each other and project the future of both.

Bio: Dr. Hsiao-Wuen Hon is Corporate Vice President of Microsoft, Chairman of Microsoft Asia-Pacific R&D Group, and Managing Director of Microsoft Research Asia. Dr. Hon oversees Microsoft’s research and development activities as well as collaborations with academia in Asia Pacific.
An IEEE Fellow and a Distinguished Scientist of Microsoft, Dr. Hon is an internationally recognized expert in speech technology. He serves on the editorial board of the international journal Communications of the ACM. Dr. Hon has published more than 100 technical papers in international journals and at conferences. He co-authored the book Spoken Language Processing, a graduate-level textbook and reference on speech technology used in universities around the world. Dr. Hon holds three dozen patents in several technical areas.
Dr. Hon has been with Microsoft since 1995. He joined Microsoft Research Asia in 2004 as a Deputy Managing Director and was promoted to Managing Director in 2007. In 2014, Dr. Hon was appointed Chairman of Microsoft Asia-Pacific R&D Group. In addition, he founded and managed the Microsoft Search Technology Center (STC) from 2005 to 2007 and led development of Microsoft's internet search product (Bing) in Asia Pacific.
Prior to joining Microsoft Research Asia, Dr. Hon was a founding member and architect of the Natural Interactive Services Division at Microsoft Corporation. Besides overseeing all architectural and technical aspects of the award-winning Microsoft® Speech Server product, the Natural User Interface Platform, and the Microsoft Assistance Platform, he was also responsible for managing and delivering statistical learning technologies and advanced search. Dr. Hon joined Microsoft Research as a senior researcher in 1995 and has been a key contributor to Microsoft's SAPI and speech engine technologies. He previously worked at Apple Computer, where he led research and development for Apple's Chinese Dictation Kit.
Dr. Hon received a Ph.D. in Computer Science from Carnegie Mellon University and a B.S. in Electrical Engineering from National Taiwan University.

Challenges of Multimodal Interaction in the Era of Human-Robot Coexistence

Zhengyou Zhang

Director, Tencent AI Lab and Tencent Robotics X

Abstract: With the rapid progress in computing and sensory technologies, we will enter the era of human-robot coexistence in the not-too-distant future, and it is time to address the challenges of multimodal interaction. Should a robot take a humanoid form? Is it better for robots to behave as second-class citizens or as equal members of society alongside humans? Should communication between humans and robots be symmetric, or is asymmetry acceptable? What about communication between robots in the presence of humans? What does emotional intelligence mean for robots? With physical interaction between humans and robots inevitable, how can safety be guaranteed? What is the ethical and moral model for robots, and how should they follow it?

Bio: Zhengyou Zhang received the B.S. degree in electronic engineering from Zhejiang University, Hangzhou, China, in 1985, the M.S. degree in computer science from the University of Nancy, Nancy, France, in 1987, and the Ph.D. degree in computer science in 1990 and the Doctorate of Science (Habilitation à diriger des recherches) in 1994 from the University of Paris XI, Paris, France.
He has been a Distinguished Scientist at Tencent, China, since March 2018, and is the Director of Tencent AI Lab and Tencent Robotics X. Before that, he was a Partner Research Manager with Microsoft Research, Redmond, WA, USA, for 20 years. Before joining Microsoft Research in March 1998, he was a Senior Research Scientist with INRIA (French National Institute for Research in Computer Science and Control), France. In 1996-1997, he spent a one-year sabbatical as an Invited Researcher with the Advanced Telecommunications Research Institute International (ATR), Kyoto, Japan.
Dr. Zhang is an ACM Fellow and an IEEE Fellow. He is the Founding Editor-in-Chief of the IEEE Transactions on Cognitive and Developmental Systems, is on the Honorary Board of the International Journal of Computer Vision and the Steering Committee of Machine Vision and Applications, and serves or has served as an Associate Editor for many journals. He was a General Co-Chair of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017. He received the IEEE Helmholtz Test of Time Award at ICCV 2013 for his 1999 paper on camera calibration, now known as Zhang's method.

Socially-Aware User Interfaces: Can Genuine Sensitivity Be Learnt at all?

Elisabeth André

Chair of Human-Centered Multimedia, Augsburg University, Germany

Abstract: Recent years have seen a paradigm shift from purely task-based human-machine interfaces towards socially-aware interaction. Advances in deep learning have led to anthropomorphic interfaces with robust sensing capabilities that come close to or even exceed human performance. In some cases, these interfaces may convey to humans the illusion of a sentient being that cares for them. At the same time, there is the risk that - at some point - these systems may have to reveal their lack of true comprehension of the situational context and the user's needs, with serious consequences for user trust. The talk will discuss challenges that arise when designing multimodal interfaces that hide the underlying complexity from the user but still demonstrate transparent and plausible behavior. It will argue for hybrid AI approaches that look beyond deep learning and encompass a theory of mind to obtain a better understanding of the rationale behind human behaviors.

Bio: Elisabeth André is a full professor of Computer Science and Founding Chair of Human-Centered Multimedia at Augsburg University in Germany where she has been since 2001. She has multiple degrees in computer science from Saarland University, including a doctorate. Previously, she was a principal researcher at the German Research Center for Artificial Intelligence (DFKI GmbH) in Saarbrücken.
Elisabeth André has a long track record in multimodal human-machine interaction, embodied conversational agents, social robotics, affective computing, and social signal processing. Her work has won many awards, including a RoboCup Scientific Award, an Award for Most Innovative Idea at the International Conference on Tangible and Embedded Interaction (TEI), and the Most Participative Demo Award at the User Modelling, Adaptation and Personalization Conference (UMAP).
Elisabeth André has served as a General and Program Co-Chair of major international conferences, including the ACM International Conference on Intelligent User Interfaces (IUI), the ACM International Conference on Multimodal Interfaces (ICMI), and the International Conference on Autonomous Agents and Multiagent Systems (AAMAS). She has also taken on leading roles as a reviewer or panel chair for large national and international programs, such as the German Excellence Initiative, the European Research Council (ERC), and the National French Artificial Intelligence Research Program.
In 2010, Elisabeth André was elected a member of the prestigious Academy of Europe and the German Academy of Sciences Leopoldina. To honor her achievements in bringing Artificial Intelligence techniques to Human-Computer Interaction, she was awarded a EurAI fellowship (European Coordinating Committee for Artificial Intelligence) in 2013. Most recently, she was elected to the CHI Academy, an honorary group of leaders in the field of Human-Computer Interaction.
Since 2019, she has served as the Editor-in-Chief of the IEEE Transactions on Affective Computing.

Connecting Humans with Humans: Multimodal, Multilingual, Multiparty Mediation

Alexander Waibel

Professor
Carnegie Mellon University, USA
Karlsruhe Institute of Technology, Germany

Abstract: Behind much of my research work over four decades has been the simple observation that people like people and love interacting with other people more than they like interacting with machines. Technologies that truly support such social desires are more likely to be adopted broadly. Consider email, texting, chat rooms, social media, video conferencing, the internet, speech translation, even videogames with a social element (e.g., Fortnite): we enjoy the technology whenever it brings us closer to our fellow humans, instead of imposing attention-grabbing clutter. If so, how then can we build better technologies that improve, encourage, and support human-human interaction?

In this talk, I will recount my own story along this journey. When I began, building technologies for the human-human experience presented formidable challenges: computer interfaces would need to anticipate and understand the way humans interact, but in 1976 a typical computer had only two instructions for interacting with humans, character-in and character-out, and both supported only human-computer interaction.

Over the decades that followed, we began to develop interfaces that can process the various modalities of human communication, and we built systems that used several modalities in services that improve human-human interaction. These included:

  • Multimodal smart rooms sporting invisible “butlers” that attempt to anticipate human needs and make implicit wishes come true (project CHIL)
  • Mobile devices that enable cross-lingual dialogs for use in tourism, humanitarian assistance, healthcare, and government services.
  • Simultaneous translation systems and services that interpret university lectures, TV broadcasts, and parliamentary speeches in real time.

In my talk, I will discuss the challenges of interpreting multimodal signals of human-human interaction in the wild. I will show the resulting human-human systems we developed and how we made them effective. Some went on to become services that affect the way we work and communicate today.

Bio: Alexander Waibel is Professor of Computer Science at Carnegie Mellon University (USA) and at the Karlsruhe Institute of Technology (Germany). He is director of the International Center for Advanced Communication Technologies. Waibel is internationally known for his work on AI, machine learning, multimodal interfaces, and speech translation systems. He proposed early neural-network-based speech and language systems, including the TDNN, the first shift-invariant "convolutional" network. Combining advances in ML with work on better multimodal interfaces, Waibel and his team developed pioneering solutions for cross-lingual communication, simultaneous interpretation, multimodal smart rooms, and human-robot collaboration. He has published extensively in the field (>800 publications, >30,000 citations, h-index 85), received many awards, and founded more than 10 companies in an effort to transfer academic results to practical deployment. Waibel is a member of the National Academy of Sciences of Germany and a Fellow of the IEEE. He received his B.S. degree from MIT and his M.S. and Ph.D. degrees from CMU.

