Keynote Speakers
Prof. Julien Epps
Dean of the UNSW Faculty of Engineering, and Professor of Electrical Engineering and Telecommunications, University of New South Wales

Title:
Multimodal Task Analysis in Wearable Contexts
Abstract:
Accurately describing the tasks that comprise a day in your life is an inherently multimodal characterisation: after beginning each task, you become loaded to some extent by the objects, movements, communication and/or mental challenges required by that task, then you later switch to a new task, and so on. Automating task analysis of this kind, which to date is manual, post-hoc and subjective, has proven challenging. Wearable systems provide a valuable opportunity to position heterogeneous non-invasive sensors directly where they are most useful for task analysis: on the head and close to the eyes and mouth. In a longitudinal context, continuous data acquisition from these sensors is typically inefficient because the most interesting changes tend to occur infrequently, and there is scope for more investigation of automatic event-based analysis approaches inspired by the way humans annotate multimodal data. This presentation focuses on key research problems and recent results spanning psychophysiological motivations, feature extraction (including the accurate extraction of eye action units from very small near-field infrared cameras mounted on glasses frames), feature variability, multimodal fusion, machine learning, and system design for continuous and robust automatic task analysis from wearable sensors. It also highlights examples of how multimodal analysis using interpretable event-based approaches can yield new insights and new machine learning research directions. Task analytics of this kind offer enormous potential for individual users to empower themselves and interact more seamlessly with machines.
Bio:
Julien Epps received his BE (Hons I) and PhD degrees in Electrical Engineering from the University of New South Wales. Following roles as a Senior Research Engineer at Motorola Labs and then as a Senior Researcher and Project Leader at National ICT Australia, he was appointed as a Senior Lecturer at UNSW Electrical Engineering and Telecommunications from 2007, Associate Professor from 2014, Professor and Head of School from 2019, and Dean of the UNSW Faculty of Engineering in 2023. He is also a Co-Director of the NSW Smart Sensing Network (NSSN), and has held roles as a Contributed Principal Researcher at Data61, CSIRO, and Scientific Advisor for Boston-based startup Sonde Health. He has delivered invited tutorials to major international conferences such as INTERSPEECH and IEEE SMC and invited keynotes to IEEE ASRU and workshops of ACII, ACM Multimedia and ACM ICMI. He serves as an Associate Editor for IEEE Transactions on Affective Computing, and was a member of the Advisory Board of ICMI and the Executive Committee of AAAC, following roles as Program Chair for ICMI 2012, General Chair for ICMI 2013, and various organisational roles with ICMI since 2014.
(Photo from UNSW profile)
Prof. Liming Zhu
Research Director, CSIRO’s Data61
Conjoint Professor, UNSW

Title:
Designing for Meaningful Oversight: Human and Organisational Agency in Multimodal AI Systems
Abstract:
As multimodal AI systems increasingly operate through diverse sensory inputs, tool use, and autonomous workflows, ensuring safety and responsibility requires more than simply placing a human “in the loop” or assigning liability to an organisation. This talk reconsiders the idea of meaningful oversight—not as symbolic presence or bureaucratic rubber-stamping, but as real agency and influence—by examining how both individuals and organisations can exert practical control over system behaviour. Drawing on ideas such as the lowest-cost avoider principle, where responsibility falls to those best positioned to prevent harm, and capability-based governance, which ties responsibility to technical control rather than formal roles, the talk invites reflection on how influence and accountability should be structured. From an engineering perspective, we’ll explore how system-level design choices—such as intervention points, safeguards, and reasoning trace monitoring—can support more substantive oversight in practice. By highlighting common design patterns that encode these principles, the talk presents a systems-oriented view of safe and responsible multimodal interaction, where accountability is not an afterthought, but something designed into the architecture itself.
Bio:
Dr Liming Zhu is a Research Director at CSIRO’s Data61, the AI/digital arm of Australia’s national science agency, and a conjoint professor at UNSW. He contributes to the OECD.AI’s AI Risks and Accountability, the Responsible AI at Scale think tank at Australia’s National AI Centre, ISO AI standards committees, and Australia’s AI safety standard. His research division innovates in AI engineering, responsible/safe AI, blockchain, quantum software, privacy, and cybersecurity, and hosted Australia’s Consumer Data Right/Open Banking standards setting. Dr Zhu has authored over 300 papers and is a regular keynote speaker. He delivered the keynote “Software Engineering as the Linchpin of Responsible AI” at the International Conference on Software Engineering. His latest books, “Responsible AI: Best Practices for Creating Trustworthy AI Systems” and “Engineering AI Systems: Architecture and DevOps Essentials,” reflect his vision for the rigorous engineering of responsible and safe AI systems for society.
(Photo from LinkedIn profile)
Prof. Fang Chen
Distinguished Professor, Executive Director, UTS Data Science Institute (DSI), University of Technology Sydney (UTS)

Title:
Multimodal AI for Transforming Industries and Empowering Social Interaction
Abstract:
In this keynote, I will explore how multimodal artificial intelligence (AI) is transforming key sectors, including water, transport, agriculture, and healthcare, by integrating diverse data streams to drive innovation and deliver measurable business and societal impact. By harnessing multimodal information such as sound, vision, physiological signals, behavioural patterns, and both structured and unstructured data, AI is increasingly capable of supporting complex decision-making and enriching human-machine interactions.
Drawing on real-world deployments, I will demonstrate how multimodal AI enhances operational efficiency, enables adaptive learning and decision-making, and fosters more responsive, intelligent systems. These applications not only improve productivity and resilience but also create new opportunities for sustainable growth and inclusive societal benefit.
Rooted in interdisciplinary research spanning AI, human-computer interaction, data science, behavioural science, neuroscience, and more, this talk will highlight technological advances while also addressing the ethical and practical challenges of deploying AI in real-world contexts. By focusing on innovative data-driven and human-centric solutions, we can unlock the full potential of multimodal AI to transform industries, empower individuals, and shape a more connected and sustainable future.
Bio:
Distinguished Professor Fang Chen is a globally recognised leader in artificial intelligence, data science, and human-machine collaboration. Her career spans influential leadership roles across academia, government, and industry, including senior positions at Intel, Motorola, CSIRO, and her current role as Executive Director of the Data Science Institute at the University of Technology Sydney.
Her research focuses on the development of data-driven, human-centric, and ethically aligned AI systems to address complex challenges across sectors such as transport, water, energy, agriculture, healthcare, telecommunications, finance, and beyond. Her pioneering contributions in multimodal AI, machine learning, and cognitive modelling have delivered real-world impact through advanced decision support systems, operational optimisation, and large-scale predictive analytics.
Her achievements have been recognised with numerous prestigious honours, including the Australian Museum Eureka Prize for Excellence in Data Science in 2018, widely regarded as the “Oscar” of Australian science; the NSW Premier’s Prize for Science and Engineering in 2021; the IFIP Brian Shackle Award in 2017 for international impact in human-computer interaction; the Australia and New Zealand Women in AI Award; the Australian Financial Review AI Award in Sustainability in 2025; recognition as AWA NSW Water Professional of the Year in 2016; and multiple research and innovation awards across Intelligent Transport Systems, the iAwards, and more.
Distinguished Professor Chen is a trusted advisor to national and international bodies shaping innovation and the future of responsible technologies. She serves on the Industry Innovation and Science Australia Board, the NSW Government AI Review Committee, the Singapore National Research Foundation expert panel, and the Advisory Board of the ACM Journal on Responsible Computing. She also chairs the ACM Intelligent User Interfaces Steering Committee and has held advisory roles with ITS Australia, several startup boards, and as a venture partner in a deep tech venture fund.
She has authored more than 400 peer-reviewed publications, written several influential books, and received multiple Best Paper Awards at top-tier international conferences. She holds over 30 patents across eight countries and has delivered more than 200 invited and keynote presentations, including TEDx talks. Her work continues to shape the future of technologies through research excellence, impactful innovation, and global policy leadership for the public good.
Keynote by the Sustained Accomplishment Award Winner
Prof. Stephen Brewster
Professor of Human-Computer Interaction, School of Computing Science, University of Glasgow

Title:
From audio, through haptics to augmented reality: travels in multimodal interaction
Abstract:
In this Sustained Accomplishment Award talk, I will discuss the journey of my work through the area of multimodal interaction. I started my research career studying non-speech audio and designing Earcons for everyday interactions. Building on lessons from audio, designing Tactons, or tactile icons, came next. Here the focus was on the newly emerging area of mobile phones and improving usability when visual displays were small. From there, I expanded to thermal feedback and ultrasound haptics, expanding the possibilities of touch-based interaction. Most recently, my work took a different direction, looking at virtual and augmented reality for passengers, an unexplored area for interaction design. In this talk, I will discuss what I learned about interaction in each of these modalities and how this can be applied to designing successful multimodal interfaces in the future.
Bio:
Stephen Brewster is a Professor of Human-Computer Interaction in the School of Computing Science at the University of Glasgow. He received his PhD in auditory interface design at the University of York. At Glasgow, he leads the Multimodal Interaction Group, which is very active and has a strong international reputation in HCI (http://mig.dcs.gla.ac.uk). His research focuses on multimodal HCI, or using multiple sensory modalities and control mechanisms (particularly audio, haptics and gesture) to create a rich, natural interaction between human and computer. His work has a strong experimental focus, applying perceptual research to practical situations. A long-term focus has been on mobile interaction and how we can design better user interfaces for users who are on the move. Other areas of interest include haptics, wearable devices and in-car interaction. He pioneered the study of non-speech audio and haptic interaction for mobile devices with work starting in the 1990s. He currently holds an ERC Advanced Grant in the area of AR/VR for passengers. He was General Chair of CHI 2019 in Glasgow, CHI papers chair in 2013 and 2014, and has previously chaired MobileHCI, EuroHaptics and TEI. He is a member of the ACM SIGCHI Academy, an ACM Distinguished Speaker and a Fellow of the Royal Society of Edinburgh.
(Photo from VAM Realities Network profile)

