{"id":1266,"date":"2023-11-29T01:09:12","date_gmt":"2023-11-28T19:39:12","guid":{"rendered":"https:\/\/icmi.acm.org\/2024\/?page_id=1266"},"modified":"2026-03-12T14:29:48","modified_gmt":"2026-03-12T08:59:48","slug":"grand-challenges","status":"publish","type":"page","link":"https:\/\/icmi.acm.org\/2026\/grand-challenges\/","title":{"rendered":"Grand Challenges"},"content":{"rendered":"<p>[et_pb_section fb_built=&#8221;1&#8243; admin_label=&#8221;section&#8221; _builder_version=&#8221;4.14.4&#8243; background_enable_image=&#8221;off&#8221; custom_padding=&#8221;3px||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;][et_pb_row admin_label=&#8221;row&#8221; _builder_version=&#8221;4.14.4&#8243; background_size=&#8221;initial&#8221; background_position=&#8221;top_left&#8221; background_repeat=&#8221;repeat&#8221; width=&#8221;90%&#8221; custom_padding=&#8221;4px|||||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;3.25&#8243; custom_padding=&#8221;|||&#8221; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221; theme_builder_area=&#8221;post_content&#8221;][et_pb_text _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; text_font=&#8221;||||||||&#8221; text_text_color=&#8221;#4f4f4f&#8221; text_font_size=&#8221;13px&#8221; header_4_text_color=&#8221;#3DBDA8&#8243; header_4_line_height=&#8221;2em&#8221; header_5_text_color=&#8221;#6292C2&#8243; header_5_line_height=&#8221;1.6em&#8221; custom_margin=&#8221;||0px|||&#8221; hover_enabled=&#8221;0&#8243; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221; sticky_enabled=&#8221;0&#8243;]<\/p>\n<h4><strong>Grand Challenges<\/strong><\/h4>\n<h5><strong>Challenge 1 \u2013 SMART Challenge Series: Context-Aware Student Engagement Detection Challenge<\/strong><\/h5>\n<p><span style=\"color: #cc99ff;\"><strong>WEB SITE: <a href=\"https:\/\/sites.google.com\/nyu.edu\/cased\" target=\"_blank\" rel=\"noopener\" style=\"color: #cc99ff;\">https:\/\/sites.google.com\/nyu.edu\/cased<\/a><\/strong><\/span><\/p>\n<p><strong>Abstract:<\/strong> We are launching the SMART Challenge Series, hosting competitions under the broader focus of the Social Machines and Robotics (SMART) Lab, New York University, Abu Dhabi. This year, we propose the Context-Aware Student Engagement Detection Challenge. Existing engagement detection approaches primarily rely on behavioral and visual cues, such as gaze direction, facial expressions, body posture or clickstream patterns, but often overlook the situational context. This gap may lead to incomplete interpretations of a student&#8217;s engagement. To address this, we introduce a unique multimodal dataset that includes context-aware annotations, enabling more holistic engagement analysis. Our dataset includes in-the-wild video recordings of students attending online classes, paired with metadata which captures their personality traits, as well as the personality traits of the professor, the lecture content via a recording of the shared screen, and video stream of the professor, capturing their lecturing behavior. Each video is annotated with multi-dimensional engagement labels (Confused, Bored, Engaged, Focused, and Interested) based on judgments from crowd-sourced annotators. 
**Authors:** Hanan Salam, Gulshan Sharma, Jialin Li, Shreya Ghosh, Hazim Kemal Ekenel, Albert Ali Salah

##### Challenge 2 – ERR@HRI 3.0 Challenge: Multimodal Detection of Errors and Anticipation in Human-Robot Interactions

**WEB SITE:** https://sites.google.com/view/errhri30/

**Abstract:** As robots become increasingly integrated into human environments, their ability to detect and respond to errors remains critical for maintaining user trust and interaction quality. While recent advances in machine learning have improved error detection capabilities, most approaches are limited to specific contexts, controlled settings, or pre-extracted features, limiting their generalizability and applicability to real-world conditions. To address this challenge, the third edition of the ERR@HRI Challenge (ERR@HRI 3.0) provides researchers with two complementary datasets that enable end-to-end innovation in methods for both detecting and preventing errors in human-robot interaction. The challenge offers raw, non-anonymized video data from naturalistic settings: (1) the Bystander Affect Detection (BAD) dataset, containing webcam recordings of 45 participants' spontaneous reactions to robot and human failure scenarios; and (2) the Bad Idea dataset, featuring 29 participants' anticipatory facial responses while predicting action outcomes before failures occur. Both datasets were collected via crowdsourcing, capturing the inherent variability of real-world conditions: diverse lighting, camera angles, participant positioning, and environmental contexts. This naturalistic variability, while challenging, provides an authentic testbed for developing robust error detection systems. Participants are invited to develop machine learning models that can generalize across these diverse contexts and temporal stages. Submissions will be evaluated on standard classification metrics (e.g., F1-score) with consideration for real-world deployment constraints. This challenge is a step toward developing robust error detection and prevention systems that can operate in the variable conditions of real-world human-robot collaboration.
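Because the challenge provides raw video rather than pre-extracted features, end-to-end pipelines are the intended workflow. Below is a minimal, hypothetical sketch of a frame-level error detector; the feature dimension, the GRU architecture, and the assumption of an upstream visual encoder are all illustrative choices, not the challenge baseline.

```python
import torch
import torch.nn as nn

class ErrorDetector(nn.Module):
    """Illustrative sequence model for frame-level error detection.
    Per-frame features are assumed to come from a visual encoder of
    the participant's choosing, applied to the raw webcam video."""
    def __init__(self, feat_dim=512, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, feat_dim)
        h, _ = self.rnn(x)                # per-frame hidden states
        return self.head(h).squeeze(-1)   # (batch, time) error logits

model = ErrorDetector()
frames = torch.randn(2, 30, 512)          # 2 clips, 30 frames of features
probs = torch.sigmoid(model(frames))       # per-frame error probability
print(probs.shape)                         # torch.Size([2, 30])
```

A temporal model of this kind can emit a probability at every frame, which suits both tasks: post-hoc detection on the BAD dataset and anticipation on the Bad Idea dataset, where the signal of interest appears before the failure occurs.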
**Authors:** Maria Teresa Parreira, Shiye Cao, Micol Spitale, Amama Mahmood, Maia Stiber, Chien-Ming Huang, Hatice Gunes, Wendy Ju

##### Challenge 3 – Cross-Cultural Misogynistic Meme Detection

**WEB SITE:** https://sites.google.com/view/cc-mmd2026/

**Abstract:** This Grand Challenge introduces CC-MMD, a multilingual and multimodal dataset for misogynistic meme classification across three cultural contexts: Indian, Chinese, and Western (Ireland). The challenge targets cultural robustness in multimodal interaction by evaluating a single binary classification task across culture-specific test partitions. CC-MMD includes memes in English, Tamil, Malayalam, and Chinese, and supports cross-cultural analysis through a standardized annotation pipeline that includes translation and culturally informed labeling.

**Authors:** Rahul Ponnusamy, Saranya Rajiakodi, Bhuvaneswari Sivagnanam, Anshid Kizhakkeparambil, Ping Du, Paul Buitelaar, Bharathi Raja Chakravarthi
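To illustrate the culture-specific evaluation described in the Challenge 3 abstract, here is a minimal sketch that scores one binary classifier separately on each cultural test partition. The partition names mirror the abstract; the per-partition data, the use of F1, and the final averaging step are assumptions for illustration, since the official metric is defined on the challenge site.

```python
from sklearn.metrics import f1_score

# Hypothetical per-partition labels and predictions; the real split
# files come from the CC-MMD challenge organizers.
partitions = {
    "indian":  ([1, 0, 1, 1], [1, 0, 0, 1]),
    "chinese": ([0, 0, 1, 0], [0, 1, 1, 0]),
    "western": ([1, 1, 0, 0], [1, 1, 0, 1]),
}

scores = {}
for culture, (y_true, y_pred) in partitions.items():
    scores[culture] = f1_score(y_true, y_pred)  # binary F1 per culture

# Averaging across partitions rewards cultural robustness rather than
# strong performance on any single context (an assumed aggregation).
print(scores, sum(scores.values()) / len(scores))
```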