{"id":1266,"date":"2023-11-29T01:09:12","date_gmt":"2023-11-28T19:39:12","guid":{"rendered":"https:\/\/icmi.acm.org\/2024\/?page_id=1266"},"modified":"2024-08-27T01:36:54","modified_gmt":"2024-08-26T20:06:54","slug":"grand-challenges","status":"publish","type":"page","link":"https:\/\/icmi.acm.org\/2024\/grand-challenges\/","title":{"rendered":"Grand Challenges"},"content":{"rendered":"<p>[et_pb_section fb_built=&#8221;1&#8243; admin_label=&#8221;section&#8221; _builder_version=&#8221;4.14.4&#8243; background_enable_image=&#8221;off&#8221; custom_padding=&#8221;3px||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;][et_pb_row admin_label=&#8221;row&#8221; _builder_version=&#8221;4.14.4&#8243; background_size=&#8221;initial&#8221; background_position=&#8221;top_left&#8221; background_repeat=&#8221;repeat&#8221; width=&#8221;90%&#8221; custom_padding=&#8221;4px|||||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;3.25&#8243; custom_padding=&#8221;|||&#8221; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221; theme_builder_area=&#8221;post_content&#8221;][et_pb_text _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; text_font=&#8221;||||||||&#8221; text_text_color=&#8221;#4f4f4f&#8221; text_font_size=&#8221;13px&#8221; header_4_text_color=&#8221;#3DBDA8&#8243; header_4_line_height=&#8221;2em&#8221; header_5_text_color=&#8221;#6292C2&#8243; header_5_line_height=&#8221;1.6em&#8221; custom_margin=&#8221;||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<h4><strong>Grand Challenges<br \/><\/strong><\/h4>\n<h5><strong>EVAC<\/strong><\/h5>\n<p><a href=\"https:\/\/sites.google.com\/view\/evac-challenge2024\/home\" target=\"_blank\" rel=\"noopener\">Go to EVAC site<\/a><\/p>\n<p>As the prevalence of autonomous interactive agents is growing incredibly fast, it becomes increasingly clear that these virtual agents must not only comprehend and respond to our verbal content but also engage with our emotions, which is crucial for enabling more profound interactions. While recent advancements in AI have significantly improved the automatic recognition and understanding of human speech, challenges persist in accurately identifying and addressing the nuances of human emotions. We assume that an empathic virtual agent should excel in at least three key tasks: i) recognising human&#8217;s spontaneous emotional expressions alongside understanding the verbal content, ii) generating appropriate responses in terms of timing and style, and iii) providing insightful feedback while comprehending user responses. In order to accelerate the development of empathic agents, we introduce the first Empathic Virtual Agent Challenge: EVAC. In its inaugural edition, the focus is set on the robust recognition of spontaneous human expressions during interactions with a virtual agent, using the recently introduced THERADIA WoZ dataset. Participants will have to predict the intensity of dimensional or categorical attributes of affect, from audiovisual sequences of human interactions in French with a virtual agent. 
##### Organisers

- [Fabien Ringeval](https://sites.google.com/site/fabienringeval/home)
- [Björn Schuller](http://www.schuller.one/)
- [Gérard Bailly](https://www.gipsa-lab.grenoble-inp.fr/~gerard.bailly/)
- [Hippolyte Fournier](https://www.linkedin.com/in/hippolyte-fournier-a3570315a/)
- [Safaa Azzakhnini](https://www.linkedin.com/in/safaa-azzakhnini-b8092065/)

##### ERR

[Go to ERR site](https://sites.google.com/cam.ac.uk/err-hri/home)

Human-Robot Interaction (HRI) research is placing increasing emphasis on developing autonomous robots that can be deployed in real-world scenarios, in order to understand the implications of integrating such robots into our lives. However, past literature has shown that autonomous robots often make mistakes, for example interrupting people or taking a very long time to respond. Such robot failures may disrupt the interaction and negatively affect how people perceive the robot. To overcome this problem, robots should be able to detect HRI failures.

The ERR@HRI challenge addresses the problem of failure detection in human-robot interaction (HRI) by providing the community with the means to benchmark mono-modal versus multimodal robot failure detection in HRI. Upon participants' acceptance of the ERR@HRI terms and conditions by signing the End User Licence Agreement (EULA), we will share with them a dataset of multimodal non-verbal feature statistics (i.e., facial, speech, and pose features), together with labels, for interaction clips in which individuals interact with a robotic coach delivering positive psychology exercises. Audio-video recordings will not be provided, owing to anonymity and ethical requirements. The feature statistics and labels will be used to train and evaluate the predictive models.
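Purely as an illustration of this setup (the actual ERR@HRI data layout, feature dimensionality, and evaluation protocol may differ), here is a minimal baseline sketch that trains a classifier on per-window feature statistics and reports the detection metrics listed below, using synthetic placeholder data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split

# Placeholder data: shapes, features, and labels are hypothetical.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))    # per-window multimodal feature statistics
y = rng.integers(0, 2, size=500)   # binary label, e.g. interaction rupture

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
prec, rec, f1, _ = precision_recall_fscore_support(
    y_te, clf.predict(X_te), average="binary", zero_division=0
)
print(f"precision={prec:.2f} recall={rec:.2f} F1={f1:.2f}")
```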
The dataset has been annotated as a time series with the following binary labels, each marked (0) absent or (1) present:

- robot mistake (e.g., the robot interrupting or failing to respond);
- user awkwardness (e.g., the coachee feeling uncomfortable interacting with the robot in the absence of any robot mistake);
- interaction rupture (i.e., the user displaying cues of awkwardness towards the robot and/or the robot making a mistake).

We invite participants to collaborate in teams and submit their multimodal ML models for evaluation. Models will be benchmarked on various performance metrics, including accuracy, precision, recall, F1 score, and timing-based metrics for detecting robot failures.

##### Organisers

- Micol Spitale, Assistant Professor, DEIB, Politecnico di Milano, and Visiting Affiliated Researcher, Department of Computer Science and Technology, University of Cambridge
- Maria Teresa Parreira, PhD student, Information Science Department, Cornell University
- Maia Stiber, PhD student, Johns Hopkins University, USA
- Chien-Ming Huang, Assistant Professor, Johns Hopkins University, USA
- Wendy Ju, Associate Professor of Information Science, Jacobs Technion-Cornell Institute at Cornell Tech
- Malte Jung, Associate Professor in Information Science, Cornell University
- Hatice Gunes, Full Professor of Affective Intelligence & Robotics, University of Cambridge, United Kingdom

##### EmotiW 2024

This challenge has been cancelled.

[Go to EmotiW site](https://sites.google.com/view/emotiw2024)

The Tenth Emotion Recognition in the Wild (EmotiW) 2024 Grand Challenge is a half-day event focused on affective sensing in unconstrained conditions, comprising an audio-video news-reader emotion classification sub-challenge and an engagement prediction sub-challenge, both of which mimic real-world conditions. Details are available at sites.google.com/view/emotiw2024.

##### Organisers

- Abhinav Dhall, Flinders University
- Shreya Ghosh, Curtin University
- Roland Goecke, UNSW
- Tom Gedeon, Curtin University