Prof. Yvonne Rogers
Professor of Interaction Design, University College London, UK
Bursting our Digital Bubbles: Life Beyond the App
Thirty years ago, a common caricature of computing was a frustrated user sitting at a PC, hands hovering over a keyboard and mouse.
Nowadays, the picture is very different. The PC has been largely overtaken by the laptop, the smartphone and the tablet, and more and more people are using them extensively and everywhere as they go about their working and everyday lives. Instead, the caricature has become one of people increasingly living in their own digital bubbles: heads down, glued to a mobile device, pecking and swiping at digital content with one finger. How can designers and researchers break out of this app mindset and exploit the new generation of affordable multimodal technologies, in the form of physical computing, the internet of things, and sensor toolkits, to begin creating more diverse heads-up, hands-on, arms-out user experiences? In my talk I will argue for a radical rethink of our relationship with future technologies: one that inspires us, through shared devices, tools and data, to be more creative, playful and thoughtful of each other and our surrounding environments.
Yvonne Rogers is a Professor of Interaction Design, the director of UCLIC and a deputy head of the Computer Science department at UCL. She is the PI at UCL for the Intel Collaborative Research Institute on Sustainable Connected Cities, which was launched in October 2012 as a joint collaboration with Imperial College. She was awarded a prestigious EPSRC Dream Fellowship to rethink the relationship between ageing, computing and creativity. She is a visiting professor at the Open University and Indiana University, and has spent sabbaticals at Stanford, Apple, Queensland University, the University of Cape Town, the University of Melbourne, QUT and UC San Diego. In 2012 she published a monograph, "HCI Theory: Classical, Modern and Contemporary." From 2006 to 2011, Yvonne was professor of HCI in the Computing Department at the OU, where she set up the Pervasive Interaction Lab. From 2003 to 2006, she was a professor in Informatics at Indiana University. Prior to this, she spent 11 years at the former School of Cognitive and Computing Sciences at Sussex University. Yvonne was one of the principal investigators on the UK Equator Project (2000-2007), where she pioneered ubiquitous learning. She has published widely, and is one of the authors of the definitive textbook on Interaction Design and HCI, now in its 3rd edition, which has sold over 150,000 copies worldwide and has been translated into six languages. She is a Fellow of the British Computer Society and of the ACM CHI Academy: "an honorary group of individuals who have made substantial contributions to the field of human-computer interaction. These are the principal leaders of the field, whose efforts have shaped the disciplines and/or industry, and led the research and/or innovation in human-computer interaction."
Cafer Tosun
Senior Vice President, SAP Strategic Research and Innovations
Member of the SAP Global Leadership Team
Smart Multimodal Interaction Through Big Data
Smartphones and mobile technologies have changed software usage dramatically. Ease of use and simplicity have made software accessible to a huge number of users. Users now expect interaction with business software to become as simple as interaction with consumer software. In particular, through the use of mobile devices, consumer and business software are coming closer together. This connectivity leads to a huge increase in data. Sensor-rich machines, from aircraft to manufacturing robots, collect vast amounts of data, enabling real-time machine health monitoring. During this process, the user experience can be enhanced and technicians' work simplified with mobile and wearable devices. Not only business software and industries, but the entire society is affected by this trend. Schools and education will utilize multimodal capabilities, turning schools into smart schools. Virtual classrooms using augmented reality are going to revolutionize education. In addition to enhancing the learning process, they will also catalyze the sharing of ideas. Such big data opens new possibilities and opportunities that can be used to enhance smart multimodal interaction. In-memory technologies such as SAP HANA can increase context sensitivity and make multimodal interaction smarter, and can provide instant contextual information to increase ease of use. Next-generation software systems and applications will have to enable smart, seamless and contextual multimodal interaction capabilities. New tools, technologies and solutions will be required to increase ease of use and to build the user experience of the future.
Cafer Tosun is Senior Vice President for Strategic Innovations at SAP. In this role he is responsible for SAP's global strategic innovation programs and heads all public research relationships with governments, strategic partners and universities. He is also the executive Managing Director in charge of the SAP Innovation Center in Turkey. Prior to this role, he served as Managing Director of SAP's first Innovation Center and was responsible for the joint projects with the Hasso Plattner Institute for Software Systems Engineering in Potsdam.
Tosun has been with SAP since 1993. In that time, he has held various development, consulting and management positions, including eight years in Silicon Valley, California. He studied computer science and holds a project management certification from Stanford University.
In 2012, Tosun was appointed to the MIT Technology Advisory Board. He is Co-Chair of the Networked European Software and Services Initiative (NESSI), an executive board member of the Connected Living Organization, and the CEO of the German Turkish Advanced Research Center (GT-ARC) for ICT.
Prof. Peter Robinson
Professor of Computer Technology in the Computer Laboratory at the University of Cambridge
Computation of Emotions
When people talk to each other, they express their feelings through facial expressions, tone of voice, body postures and gestures. They even do this when they are interacting with machines. These hidden signals are an important part of human communication, but most computer systems ignore them. Emotions need to be considered as an important mode of communication between people and interactive systems. Affective computing has enjoyed considerable success over the past 20 years, but many challenges remain.
Peter Robinson is Professor of Computer Technology in the Computer Laboratory at the University of Cambridge, where he leads the Rainbow Research Group working on computer graphics and interaction.
Professor Robinson's research concerns problems at the boundary between people and computers. This involves investigating new technologies to enhance communication between computers and their users, and new applications to exploit these technologies. The main focus for this is human-computer interaction, where he has been leading work for over 20 years on the use of video and paper as part of the user interface. The idea is to develop augmented environments in which everyday objects acquire computational properties through user interfaces based on video projection and digital cameras. Recent work has included desk-size projected displays and tangible interfaces.
With rapid advances in key computing technologies and the heightened user expectation of computers, the development of socially and emotionally adept technologies is becoming a necessity. He has led investigations of the inference of people's mental states from facial expressions, vocal nuances, body posture and gesture, and other physiological signals, and also considered the expression of emotions by robots and cartoon avatars.
He has also pursued a parallel line of research into inclusive user interfaces. Collaboration with the Engineering Design Centre has investigated questions of physical handicap, and research students have considered visual handicaps. This has broader applications for interaction with ubiquitous computers, where the input and output devices themselves impose limitations.
Professor Robinson is a Fellow of Gonville & Caius College where he previously studied for a first degree in Mathematics and a PhD in Computer Science under Neil Wiseman. He is a Chartered Engineer and a Fellow of the British Computer Society. Professor Robinson's talk is supported by the ACM Distinguished Speakers Program.
Prof. Alex Waibel
Professor, Computer Science
Carnegie Mellon University
A World without Barriers: Connecting the World across Languages, Distances and Media
As our world becomes increasingly interdependent and globalization brings people together more than ever, we quickly discover that it is no longer the absence of connectivity (the "digital divide") that separates us, but that new and different forms of alienation still keep us apart, including language, culture, distance and interfaces. Can technology provide solutions to bring us closer to our fellow humans?
In this talk, I will present multilingual and multimodal interface technology solutions that offer the best of both worlds: maintaining our cultural diversity and locale while providing for better communication, greater integration and collaboration.
We explore: (i) smartphone-based speech translators for everyday travelers and humanitarian missions; (ii) simultaneous translation systems and services to translate academic lectures and political speeches in real time (at universities, the European Parliament and broadcasting services); and (iii) multimodal language-transparent interfaces and smartrooms to improve joint and distributed communication and interaction.
We will first discuss the difficulties of language processing, then review how the technology works today and what levels of performance are now possible. Key to today's systems is effective machine learning, without which scaling multilingual and multimodal systems to unlimited domains, modalities, accents, and more than 6,000 languages would be hopeless. Equally important are effective human-computer interfaces, so that language differences fade naturally into the background and communication and interaction become natural and engaging. I will present recent research results as well as examples from our field trials and deployments in educational, commercial, humanitarian and government settings.
Alex Waibel is a Professor of Computer Science at Carnegie Mellon University, Pittsburgh and at the University of Karlsruhe (Germany).
He directs interACT, the international Center for Advanced Communication Technologies at both Universities with research emphasis in speech recognition, language processing, speech translation, multimodal and perceptual user interfaces. At Carnegie Mellon, he also serves as Associate Director of the Language Technologies Institute and holds joint appointments in the Human Computer Interaction Institute and the Computer Science Department.
Dr. Waibel was one of the founders of C-STAR, the international consortium for speech translation research, and served as its chairman from 1998 to 2000. His team developed the JANUS speech translation system, the first American and European speech translation system, and a number of multimodal systems including the perceptual Meeting Room, the Meeting Recognizer and the Meeting Browser. He directed the CHIL program (the largest FP-6 Integrated Project on multimodality) in Europe and the NSF-ITR project STR-DUST in the US.
Dr. Waibel received his B.S. in Electrical Engineering from the Massachusetts Institute of Technology in 1979, and his M.S. and Ph.D. degrees in Computer Science from Carnegie Mellon University in 1980 and 1986. His work on Time Delay Neural Networks received the IEEE Best Paper Award in 1990. His contributions to multilingual and speech translation systems were recognized with the Alcatel SEL Research Prize for Technical Communication in 1994, the Allen Newell Award for Research Excellence from CMU in 2002, and the Speech Communication Best Paper Award in 2002.
Dr. Sharon Oviatt
President and Director, Incaa Designs
Beyond Multimodal Interfaces:
Designing Computers that Stimulate Human Thought
During the last decade, cell phones with multimodal interfaces based on combined new media have eclipsed keyboard-based graphical interfaces as the dominant worldwide computer interface. In addition to supporting mobility, multimodal interfaces have been essential for shifting the fulcrum of human-computer interaction much closer to the human, with the impact of far better support for performance. Their emergence is part of the long-term evolution of more expressively powerful input to computers, which includes the ability to convey information in different modalities, representations, and linguistic codes.
A new body of research reframes interface design away from the historical engineering focus on efficiency by instead asking the novel question:
Can a computer input capability per se have an impact on basic human cognition?
That is, if the same person completes the same task and we change only the computer input tool they use, will this alone affect their ability to think and perform well? And if so, how large will the impact be, and why will it occur? Findings on this topic reveal that interfaces can be designed with more expressively powerful input that substantially stimulates basic cognition, including correct problem solving, accurate inferential reasoning, and the production of domain-appropriate ideas, with improvements ranging from +9% to +38% in English speakers. These findings generalize across different content domains, user populations, ability levels, types of thinking and reasoning, computer hardware, and evaluation metrics. In this talk, I will summarize how and why this occurs when people use interfaces that can convey multiple modalities and representations.
Regarding future directions, there is an urgent need for new research on designing more expressively powerful interfaces for languages that are not Roman-alphabetic (e.g., Hindi, Mandarin). Progress on this topic would have an impact on 80% of worldwide communicators, for whom keyboard input currently poses a major barrier to using technology. Furthermore, the magnitude of advantage is expected to be far larger than that observed in English speakers. I will discuss the implications of this research for designing a new generation of more expressively powerful digital tools that facilitate thinking and reasoning, which will transform computer interface design, and educational interfaces in particular.
Sharon Oviatt is internationally known for her multidisciplinary work on multimodal and mobile interfaces, human-centered interfaces, educational interfaces, and design and evaluation. She has published over 150 scientific articles in a wide range of venues. She is an Associate Editor of the main journals and edited book collections in the field of human-centered interfaces, including the journals Human Computer Interaction and ACM Transactions on Interactive Intelligent Systems. She is the recipient of a National Science Foundation Special Creativity Award for pioneering research on mobile multimodal interfaces. Within ICMI, Sharon was Founding Chair of ICMI's Advisory Board, and the annual ACM international conference series was initiated under her guidance. In 2000, she published the first review of multimodal systems in Human Computer Interaction. She is author of the review on "Multimodal Interfaces" in The Human Computer Interaction Handbook (now in its 3rd edition). In 2013, she published The Design of Future Educational Interfaces (Routledge). Her latest textbook, The Paradigm Shift to Multimodality in Contemporary Computer Interfaces (co-authored with Phil Cohen), will be published in 2015. Sharon is the recipient of the inaugural ICMI Sustained Accomplishment Award.