Dr. Louis-Philippe Morency

Assistant Professor at the Language Technologies Institute, Carnegie Mellon University, USA

Title: Multimodal Machine Learning

Abstract: Multimodal machine learning is a vibrant multi-disciplinary research field that addresses some of the original goals of artificial intelligence by integrating and modeling multiple communicative modalities, including linguistic, acoustic, and visual messages. From the initial research on audio-visual speech recognition to more recent language-and-vision projects such as image and video captioning, this research field poses unique challenges for multimodal researchers, given the heterogeneity of the data and the contingency often found between modalities. This course will teach fundamental concepts of multimodal machine learning, including multimodal alignment and fusion, heterogeneous representation learning, and multi-stream temporal modeling. We will also review recent state-of-the-art probabilistic models and computational algorithms for multimodal machine learning and discuss current and upcoming challenges.

Bio: Louis-Philippe Morency is Assistant Professor at the Language Technologies Institute at Carnegie Mellon University, where he leads the Multimodal Communication and Machine Learning Laboratory (MultiComp Lab). He received his Ph.D. and Master's degrees from the MIT Computer Science and Artificial Intelligence Laboratory. In 2008, Dr. Morency was selected as one of "AI's 10 to Watch" by IEEE Intelligent Systems. He has received 7 best paper awards at multiple ACM- and IEEE-sponsored conferences for his work on context-based gesture recognition, multimodal probabilistic fusion, and computational models of human communication dynamics. For the past three years, Dr. Morency has been leading a DARPA-funded multi-institution effort called SimSensei, which was recently named one of the year's top ten most promising digital initiatives by the NetExplo Forum, in partnership with UNESCO.

ICMI 2016 — ACM International Conference on Multimodal Interaction. Copyright © 2015-2024