{"id":1315,"date":"2023-12-04T22:43:12","date_gmt":"2023-12-04T17:13:12","guid":{"rendered":"https:\/\/icmi.acm.org\/2024\/?page_id=1315"},"modified":"2026-03-14T19:10:39","modified_gmt":"2026-03-14T13:40:39","slug":"workshops","status":"publish","type":"page","link":"https:\/\/icmi.acm.org\/2026\/workshops\/","title":{"rendered":"Workshops"},"content":{"rendered":"<p>[et_pb_section fb_built=&#8221;1&#8243; admin_label=&#8221;section&#8221; _builder_version=&#8221;4.14.4&#8243; background_enable_image=&#8221;off&#8221; custom_padding=&#8221;3px||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;][et_pb_row admin_label=&#8221;row&#8221; _builder_version=&#8221;4.14.4&#8243; background_size=&#8221;initial&#8221; background_position=&#8221;top_left&#8221; background_repeat=&#8221;repeat&#8221; width=&#8221;90%&#8221; custom_padding=&#8221;4px|||||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;3.25&#8243; custom_padding=&#8221;|||&#8221; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221; theme_builder_area=&#8221;post_content&#8221;][et_pb_text _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; text_font=&#8221;||||||||&#8221; text_text_color=&#8221;#4f4f4f&#8221; text_font_size=&#8221;13px&#8221; header_4_text_color=&#8221;#3DBDA8&#8243; header_4_line_height=&#8221;2em&#8221; header_5_text_color=&#8221;#6292C2&#8243; header_5_line_height=&#8221;1.6em&#8221; custom_margin=&#8221;||0px|||&#8221; hover_enabled=&#8221;0&#8243; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221; sticky_enabled=&#8221;0&#8243;]<\/p>\n<h4><strong style=\"font-size: 18px;\">Workshops<\/strong><br \/><strong><\/strong><\/h4>\n<h5><span style=\"color: #672b83;\"><strong>GENEA: Generation and Evaluation of Non-verbal Behaviour for Embodied Agents<\/strong><\/span><\/h5>\n<p><a href=\"https:\/\/genea-workshop.github.io\/2026\/workshop\/\" target=\"_blank\" rel=\"noopener\">Click here to go to the workshop site<\/a><\/p>\n<p>The GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Workshop 2026 aims to bring together researchers from diverse disciplines working on different aspects of non-verbal behaviour generation, facilitating discussions on advancing both generation techniques and evaluation methodologies. 
We invite contributions from fields such as human-computer interaction, machine learning, multimedia, robotics, computer graphics, and social sciences.</p>

<h5>Organisers:</h5>
<ul>
<li>Taras Kucherenko (Electronic Arts, Sweden)</li>
<li>Alice Delbosc (DAVI-Les Humaniseurs, France)</li>
<li>Gustav Eje Henter (KTH Royal Institute of Technology, Sweden)</li>
<li>Oya Celiktutan (King's College London, UK)</li>
<li>Eneko Atxa Landa (University of the Basque Country, Spain)</li>
<li>Jieyeon Woo (Korea Institute of Machinery and Materials, South Korea)</li>
<li>Haoyang Du (Technological University Dublin, Ireland)</li>
</ul>

<hr />

<h5><strong>Beyond a Concrete: Workshop on Grounding Abstract Concepts in Multimodal Interaction</strong></h5>
<p><a href="https://beacon2026.github.io/" target="_blank" rel="noopener">Click here to go to the workshop site</a></p>
<p>Abstract concepts pose a fundamental challenge for multimodal interactive systems, as they cannot be grounded in single perceptual features or fixed motor patterns; instead, they require abstraction across perception, action, language, and affect. Examples include superordinate categories (e.g., animal, tool), generalised actions (e.g., use, make), and evaluative concepts (e.g., good, appropriate) that depend on the interaction context. This workshop focuses on computational and interactional mechanisms for grounding abstract concepts beyond concrete sensory inputs, bringing together work on multimodal representation learning, embodied and developmental models, robotics, and hybrid cognitive architectures. The workshop aims to consolidate approaches, datasets, and evaluation strategies for abstract concept grounding, identify shared modelling assumptions, and clarify open challenges. Expected outcomes include a comparative discussion of architectures and benchmarks and a post-workshop summary outlining research gaps and future directions, supporting multimodal interactive systems with improved generalisation and interpretability.</p>

<h5>Organisers:</h5>
<ul>
<li>Rahul Singh Maharjan (University of Manchester, UK)</li>
<li>Haodong Xie (University of Manchester, UK)</li>
<li>Niyati Rawal (BITS-Goa, India)</li>
<li>Luca Raggioli (University of Naples Federico II, Italy)</li>
<li>Angelo Cangelosi (University of Manchester, UK)</li>
</ul>

<hr />

<h5><strong>Inclusive AI: Rethinking AI-based Multimodal Interaction for Diverse and Underrepresented Users</strong></h5>
<p><a href="https://sites.google.com/view/inclusive-ai" target="_blank" rel="noopener">Click here to go to the workshop site</a></p>
<p>This workshop marks the inaugural edition of "Inclusive AI", an initiative focused on addressing social interaction challenges in AI systems, with a particular emphasis on inclusivity for diverse and underrepresented user populations.
It builds on a growing recognition across the HCI, HRI, and AI communities of the need for cross-disciplinary venues to advance socially interactive AI that is equitable, adaptive, and user-aware.</p>

<h5>Organisers:</h5>
<ul>
<li>Giulia Barbareschi (University of Duisburg-Essen)</li>
<li>Nataliya Kosmyna (MIT Media Lab and Google)</li>
<li>Maristella Matera (Politecnico di Milano)</li>
<li>Anoop Sinha (Google)</li>
<li>Shruti Sheth (Google)</li>
<li>Micol Spitale (Politecnico di Milano)</li>
<li>Alessandro Vinciarelli (University of Glasgow)</li>
</ul>

<hr />

<h5><strong>ACCESS-MI: Context-Aware Assistive Agents for Accessible Computing</strong></h5>
<p><a href="https://access-mi.org" target="_blank" rel="noopener">Click here to go to the workshop site</a></p>
<p>Many blind and visually impaired individuals still struggle to use computers due to the increasing complexity of modern user interfaces. Screen-reader workflows often break and require slow, unreliable workarounds. Recent progress in multimodal machine learning, web automation, and other fields is making it possible to build assistive agents that integrate seamlessly with users' experiences. In line with ICMI 2026's theme of context and cultural awareness for multimodal interaction, this workshop focuses on assistive agents for accessible computing in real-world interfaces, where language and cultural variation can affect what works in practice. We bring together researchers working on accessibility, multimodal interaction, and human-centered AI for interactive systems. The workshop includes keynotes, panels, paper presentations, posters, and structured breakout discussions.
To further accelerate progress, we also host the Navigate-and-Explain Community Project, a unique ACCESS-MI initiative where participants contribute to an assistive agent that will ultimately be deployed and distributed to visually impaired people in the real world.</p>

<h5>Organisers:</h5>
<ul>
<li>Santosh Patapati (Texas, USA)</li>
<li>Rahul Kumar Mehrotra (Bennett University, Uttar Pradesh, India)</li>
<li>Trisanth Srinivasan (Texas, USA)</li>
<li>Sowmya Kirkpatrick (Meta, New York, USA)</li>
<li>Ashwini Joshi (Warner Bros, Washington, USA)</li>
</ul>

<hr />

<h5><strong>LaugHSMI: Laughter, Humour, Smiles in Multimodal Interactions</strong></h5>
<p><a href="https://laughsmi.github.io" target="_blank" rel="noopener">Click here to go to the workshop site</a></p>
<p>LaugHSMI (Laughter, Humour, Smiles in Multimodal Interactions) is a workshop dedicated to advancing research on the role of laughter, smiles, and humor in human-computer interaction and multimodal communication. These phenomena are fundamental aspects of human social interaction, yet they remain challenging to detect, interpret, and generate in computational systems.</p>
<p>Laughter and smiling serve multiple communicative functions beyond expressing amusement: they facilitate social bonding, regulate conversation flow, signal understanding, and convey complex emotional states. Humor adds another layer of complexity, involving cognitive, linguistic, and cultural dimensions. Understanding and modeling these phenomena is crucial for creating more natural, engaging, and socially intelligent interactive systems.</p>
<p>This workshop brings together researchers from affective computing, natural language processing, computer vision, speech processing, human-computer interaction, and social signal processing to address the unique challenges posed by laughter, smiles, and humor in multimodal interactions.
We aim to foster interdisciplinary dialogue and advance the state of the art in detecting, analyzing, and generating these important social signals.</p>

<h5>Organisers:</h5>
<ul>
<li>Valentin Barriere (University of Chile)</li>
<li>Sofia Callejas (Université Paris-Saclay &amp; Universidad de Chile)</li>
<li>Vladislav Maraev (University of Gothenburg)</li>
<li>Chiara Mazzocconi (Aix-Marseille Université)</li>
<li>Catherine Pelachaud (Sorbonne University)</li>
<li>Brian Ravenet (Université Paris-Saclay)</li>
</ul>

<hr />

<h5><strong>Collective States in Multimodal Interaction</strong></h5>
<p><a href="https://honda-research-institute.github.io/Collective_States_in_Multimodal_Interaction/" target="_blank" rel="noopener">Click here to go to the workshop site</a></p>
<p>This workshop explores how multimodal sensing and AI techniques can be used to detect and interpret the collective states that emerge in group interactions, in conjunction with individual participant states.</p>

<h5>Organisers:</h5>
<ul>
<li>Teruhisa Misu (Honda Research Institute USA, Inc.)</li>
<li>Zhaobo Zheng (Honda Research Institute USA, Inc.)</li>
<li>Koji Inoue (Kyoto University)</li>
<li>Chikara Maeda (Honda Research Institute Japan)</li>
</ul>

<hr />

<h5><strong>Cross-Cultural Multimodal Interaction (CCMI)</strong></h5>
<p><a href="https://sites.google.com/view/ccmi2026/home" target="_blank" rel="noopener">Click here to go to the workshop site</a></p>
<p>This ICMI 2026 workshop aims to establish an international research platform to explore how linguistic and cultural differences shape nonverbal behavior and interaction dynamics. Building on the success of our first workshop, this second edition shifts the focus from problem identification to concrete action and methodological evaluation. This year's discussion-centric workshop focuses on two key themes: (1) Concrete Data Collection &amp; Case Studies: Sharing practical "Case Reports" to overcome the logistical hurdles of multi-site data collection, with the goal of drafting a roadmap for truly cross-cultural multimodal datasets.
(2) Evaluating MLLMs in Cultural Contexts: Developing methodologies to benchmark Multimodal Large Language Models (MLLMs) for cultural sensitivity, specifically examining their ability to handle cultural nuances in gestures, facial expressions, and turn-taking.</p>

<h5>Organisers:</h5>
<ul>
<li>Koji Inoue (Kyoto University)</li>
<li>Shogo Okada (Japan Advanced Institute of Science and Technology (JAIST))</li>
<li>Divesh Lala (Kyoto University)</li>
<li>Taiga Mori (Kyoto University)</li>
<li>Sahba Zojaji (The Chinese University of Hong Kong, Shenzhen)</li>
<li>Nancy F. Chen (Agency for Science, Technology and Research (A*STAR))</li>
<li>Yukiko I. Nakano (Seikei University)</li>
<li>Tatsuya Kawahara (Kyoto University)</li>
</ul>

<hr />

<h5><strong>The Sixth International Workshop on Automated Assessment of Pain (AAP)</strong></h5>
<p><a href="http://aap-workshop.net/" target="_blank" rel="noopener">Click here to go to the workshop site</a></p>
<p>Pain is typically measured by patient self-report, but self-reported pain is difficult to interpret and may be impaired or, in some circumstances, impossible to obtain, for instance in patients with restricted verbal abilities such as neonates and young children, or in patients with certain neurological or psychiatric impairments (e.g., dementia). Additionally, the subjectively experienced pain may be partly or even completely unrelated to the somatic pathology of tissue damage and other disorders. The standard self-assessment of pain therefore does not always allow for an objective and reliable assessment of the quality and intensity of pain. Given individual differences among patients, their families, and healthcare providers, pain is often poorly assessed, underestimated, and inadequately treated. Objective, valid, and efficient assessment of the onset, intensity, and pattern of occurrence of pain is thus necessary. To address these needs, the machine learning and computer vision communities have made several efforts towards automatic and objective assessment of pain from video as a powerful alternative to self-reported pain. The workshop aims to bring together interdisciplinary researchers working in the field of automatic multimodal assessment of pain (using video and physiological signals).
A key focus of the workshop is the translation of laboratory work into clinical practice.</p>

<h5>Organisers:</h5>
<ul>
<li>Zakia Hammal (The Robotics Institute, Carnegie Mellon University, USA)</li>
<li>Steffen Walter (University Hospital Ulm, Germany)</li>
<li>Nadia Berthouze (University College London, UK)</li>
</ul>

<hr />

<h5><strong>HeMAI – Multimodal Interaction with Generative AI Health Applications</strong></h5>
<p><a href="https://qulab.github.io/HeMAI2026/" target="_blank" rel="noopener">Click here to go to the workshop site</a></p>
<p>This full-day workshop at ICMI 2026 explores the transformative potential of multimodal interaction for generative AI health applications. The workshop will address key research challenges and opportunities across diverse input and output modalities, including language, speech, vision, gesture, physiological signals, and interactive visualizations. With a focus on interactive, collaborative decision-making, the workshop will cover essential topics such as personalized systems, XAI methods for transparency and trust, synthetic data and simulation, and user-centered design principles for clinical and patient-facing scenarios. Experts and practitioners are invited to join a collaborative environment of paper presentations, panel discussions, and keynote talks to shape the future of multimodal, intelligent systems in healthcare.</p>

<h5>Organisers:</h5>
<ul>
<li>Stefan Hillmann (Technische Universität Berlin, Germany)</li>
<li>Sebastian Möller (Technische Universität Berlin &amp; DFKI Berlin, Germany)</li>
<li>Catherine Pelachaud (CNRS – ISIR, Sorbonne University, France)</li>
<li>Lisa Raithel (Technische Universität Berlin / BIFOLD &amp; DFKI, Germany)</li>
<li>Roland Roller (DFKI Berlin &amp; Technische Universität Berlin)</li>
</ul>