Keynote Speakers
From Differentiable Reasoning to Self-supervised Embodied Active Learning
Russ Salakhutdinov
Professor of Computer Science
Microsoft Research Faculty Fellow
Sloan Fellow
Carnegie Mellon University
Abstract: In this talk, I will first discuss deep learning models that can find semantically meaningful
representations of words, learn to read documents and answer questions about their content. I will introduce
methods that can augment neural representation of text with structured data from Knowledge Bases (KBs) for
question answering, and show how we can answer complex compositional questions over long structured documents
using a text corpus as a virtual KB. In the second part of the talk, I will show how we can design modular
hierarchical reinforcement learning agents for visual navigation that can handle multi-modal inputs, perform
tasks specified by natural language instructions, perform efficient exploration and long-term planning, build
and utilize 3D semantic maps to learn both action and perception models in a self-supervised manner, while
generalizing across domains and tasks.
Bio: Russ Salakhutdinov is a UPMC Professor of Computer Science in the Department of Machine Learning at
CMU. He received his PhD in computer science from the University of Toronto. After spending two post-doctoral
years at MIT, he joined the University of Toronto and later moved to CMU. Russ's primary interests lie in deep
learning, machine learning, and large-scale optimization. He is an action editor of the Journal of Machine
Learning Research, served as program co-chair for ICML 2019, and has served on the senior programme committees of
several top-tier learning conferences, including NeurIPS and ICML. He is an Alfred P. Sloan Research Fellow,
Microsoft Research Faculty Fellow, Canada Research Chair in Statistical Machine Learning, a recipient of the
Early Researcher Award, Google Faculty Award, and Nvidia's Pioneers of AI award.
Incorporating haptics into the theatre of multimodal experience design;
and the ecosystem this requires
Karon MacLean
Professor, Computer Science
Director, UBC Designing for People Research Cluster
University of British Columbia, Canada
Abstract: When novice (and sometimes expert) hapticians need to ideate about how to implement haptic
media in given applications, they often struggle to get beyond variations of vibrotactile notification or
directional guidance, even when given examples: of alternative framings of how tactile and force sensations could
be utilized, of how such sensations can be delivered, and of what they can feel like. Why is our imagination of haptic
technology so limited, when touch in the “real” world is bogglingly rich and essential? What stands in the way
of innovation in how we use haptics in multimodal design, as the technology itself becomes more mature and
diverse? How can we expand our vision of the roles it could take in the multimodal theatre of a designed
experience?
I trace these questions to four major gaps: (I) Inspiration, the lack of interesting
examples available to most of us; (II) Theory, the lack of diverse ways to conceptualize the role of
haptics in UX design; (III) Process, the many challenges of working with the technology itself
and integrating it into multimodal workflows; and (IV) Value, the difficulty of making a
hard-edged business case for an element that often enriches rather than enables. To discuss both these gaps and
approaches to surmounting them, I will draw on decades of design experience in my group as well as work with
expert and novice hapticians and industry leaders, framed in the rich use cases of learning technology and
mental health applications.
Bio: Karon MacLean is Professor in Computer Science at UBC, with degrees in Biology and Mechanical
Engineering (B.Sc., Stanford; M.Sc./Ph.D., MIT) and time spent as a professional robotics engineer (Center for
Engineering Design, University of Utah) and haptics / interaction researcher (Interval Research, Palo Alto). At
UBC since 2000, MacLean's research is at the intersection of robotics, human-computer and human-robot
interaction (HCI and HRI), psychology, and social practices. She is best known for her work in communicating
functional and affective/emotional information through our sense of touch (haptics), and in supporting haptic
and multimodal design. She has contributed design practices, inventions, and findings in cognition, affective
modelling and complex sociotechnical systems, and acted as a bridge between dispersed haptic communities from
robotics and human-computer interaction. With her group, MacLean has published over 150 peer-reviewed
publications, many of them garnering awards. She has received distinctions such as the Charles A. McDowell Award
(UBC’s highest research award), was named an IEEE Distinguished Lecturer (2019) and placed in the “Top 30 Women
in Robotics” in 2020. As a leader in her field, MacLean co-founded the IEEE Transactions on Haptics
(2008), reinvented top conferences as their general chair (IEEE HAPTICS, 2012; ACM Virtual UIST, 2020), advises
on numerous international academic and industry boards, and has led award juries for all major conferences in
her area. She is currently Special Advisor, Innovation and Knowledge Mobilization to UBC’s Faculty of Science.
MacLean founded and directs UBC’s multi-disciplinary Designing for People (DFP) Research Cluster and NSERC
CREATE training program (25 researchers spanning 11 departments and 5 faculties - dfp.ubc.ca), which has
transformed UBC’s HCI presence worldwide, and the practice of researchers across campus.
Theory-Driven Approaches to the Design of Multimodal Assessments of Learning, Emotion, and Self-Regulation in
Medicine
Susanne P. Lajoie, FRSC
Professor of Educational and
Counselling Psychology
Canada Research Chair (Tier 1) in Advanced Technologies for Learning in Authentic Settings
McGill University
Abstract: Psychological theories can inform the design of technology-rich learning environments (TREs)
to provide better learning and training opportunities. Research shows that learners do better when interacting
with material that is situated in meaningful, authentic contexts. Recently, psychologists have become interested
in the role that emotion plays in learning with technology. Lajoie investigates the situations under which technology
works best to facilitate learning and performance by examining the relations between cognition (problem solving,
decision making), metacognition (self-regulation) and affect (emotion, beliefs, attitudes, interests, etc.) in
medicine. Examples of advanced technologies to support medical students during critical thinking and problem
solving, collaboration, and communication will be presented along with a description of multimodal methodologies
for assessing the relationship between affect and learning in medical contexts. These methodologies include
physiological and behavioral indices, think-aloud protocols, eye tracking, self-report, etc. Examples will be
presented of how TREs can determine when learners are engaged and happy as opposed to bored and angry while
learning. Findings from this type of research help identify the best way to tailor the learning experience to
the cognitive and affective needs of the learner.
Bio: Professor Lajoie is a Canada Research Chair in Advanced Technologies for Learning in Authentic
Settings in the Department of Educational and Counselling Psychology and a member of the Institute for Health
Sciences Education at McGill University. She is a Fellow of the Royal Society of Canada, the American
Psychological Association, as well as the American Educational Research Association (AERA). She received the
ACFAS Thérèse Gouin-Décarie Prize for Social Sciences along with the AERA-TICL Outstanding International
Research Collaboration Award. Dr. Lajoie directs the Learning Environments Across Disciplines partnership grant
funded by the Social Sciences and Humanities Research Council of Canada. Dr. Lajoie explores how theories of
learning and affect can be used to guide the design of advanced technology-rich learning environments to promote
learning in medicine.
Socially Interactive Artificial Intelligence: Past, Present and Future
Elisabeth André
Chair for Human-Centered
Artificial Intelligence
University of Augsburg, Augsburg, Germany
Abstract: Socially interactive artificial agents are no longer mere fiction. For many, they are already part of
everyday life. Due to technical advances in multimodal behavior analysis and synthesis, the
asymmetry of communication between machines and humans is dissolving. Consequently, the
interaction with robots and virtual characters has become more intuitive and natural, particularly
for everyday users. Nevertheless, there is still some work to be done until artificial agents are
able to smoothly interact with people over more extended periods in their homes and to cope
with unforeseen situations.
In my talk, I will recall my journey into the field of socially interactive Artificial Intelligence
starting in the 1990s with the development of the Personalized Plan-Based Presenter (in short, the
PPP Persona). This cartoon character explained technical devices to users by combining speech,
gestures, and facial expressions. We quickly realized that we had to equip such characters with a
certain amount of social and emotional intelligence to keep users engaged over a more extended
period. Furthermore, it became clear that creating such agents is not a job that can be done by
computer scientists alone. In collaboration with social and medical sciences, dramaturgy, and
media art colleagues, we developed a wide range of applications with socially interactive
characters or robots over the past years, including art and entertainment, cultural training and
social coaching, and more recently, personal wellbeing and health.
In my talk, I will describe various computational methods to implement socially interactive
behaviors in artificial agents. Besides analytic methods informed by theories from the cognitive
and social sciences, I will discuss empirical approaches that enable an artificial agent to learn
socially interactive behaviors from recordings of human-human interactions or live interactions
with human interlocutors. I will highlight opportunities and challenges that arise from neural
behavior generation approaches that promise to achieve the next level of human-likeness in
virtual agents and social robots. Finally, I will share lessons we learnt during the development of
socially interactive agents. To benefit users, we must not only work on technical solutions
but also go beyond disciplinary boundaries to address the ethical, legal, and social implications of
employing such agents.
Bio: Elisabeth André is a full professor of Computer Science and Founding Chair of Human-Centered
Artificial Intelligence at Augsburg University in Germany. She has a long track record in
multimodal human-machine interaction, embodied conversational agents, social robotics,
affective computing and social signal processing. Her work has won many awards, including the
Gottfried Wilhelm Leibniz Prize 2021, which at €2.5 million is the most highly endowed German research
award. In 2010, Elisabeth André was elected a member of the prestigious Academy of Europe,
and the German Academy of Sciences Leopoldina. In 2017, she was elected to the CHI
Academy, an honorary group of leaders in the field of Human-Computer Interaction. To honor
her achievements in bringing Artificial Intelligence techniques to Human-Computer Interaction,
she was awarded a EurAI Fellowship (European Association for Artificial Intelligence) in 2013.
In 2019, she was named one of the 10 most influential figures in the history of AI in Germany
by the German Informatics Society (GI). Since 2019, she has served as Editor-in-Chief of
IEEE Transactions on Affective Computing.