{"id":368,"date":"2022-04-01T07:58:44","date_gmt":"2022-04-01T02:28:44","guid":{"rendered":"https:\/\/icmi.acm.org\/2022\/?page_id=368"},"modified":"2022-08-26T03:38:31","modified_gmt":"2022-08-25T22:08:31","slug":"workshops","status":"publish","type":"page","link":"https:\/\/icmi.acm.org\/2022\/workshops\/","title":{"rendered":"Workshops"},"content":{"rendered":"<p>[et_pb_section fb_built=&#8221;1&#8243; admin_label=&#8221;section&#8221; _builder_version=&#8221;4.14.4&#8243; background_enable_image=&#8221;off&#8221; custom_padding=&#8221;3px||5px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;][et_pb_row admin_label=&#8221;row&#8221; _builder_version=&#8221;4.14.4&#8243; background_size=&#8221;initial&#8221; background_position=&#8221;top_left&#8221; background_repeat=&#8221;repeat&#8221; width=&#8221;90%&#8221; custom_padding=&#8221;4px||6px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;3.25&#8243; custom_padding=&#8221;|||&#8221; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221; theme_builder_area=&#8221;post_content&#8221;][et_pb_text _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; text_font=&#8221;||||||||&#8221; text_text_color=&#8221;#000000&#8243; text_font_size=&#8221;13px&#8221; link_text_color=&#8221;#0C71C3&#8243; ol_font=&#8221;|600|||||||&#8221; header_3_text_color=&#8221;#083f87&#8243; header_4_text_color=&#8221;#072f93&#8243; header_4_font_size=&#8221;16px&#8221; header_5_text_color=&#8221;#085593&#8243; header_6_text_color=&#8221;#d31d1d&#8221; header_6_line_height=&#8221;1.4em&#8221; text_orientation=&#8221;justified&#8221; custom_margin=&#8221;||18px|||&#8221; hover_enabled=&#8221;0&#8243; border_width_bottom=&#8221;1px&#8221; border_color_bottom=&#8221;#a0a0a0&#8243; global_colors_info=&#8221;{}&#8221; 
theme_builder_area=&#8221;post_content&#8221; sticky_enabled=&#8221;0&#8243;]<\/p>\n<h3><strong>Workshops<\/strong><\/h3>\n<ol>\n<li style=\"text-align: left;\"><a href=\"#1\"><strong>Workshop on Multimodal Affect and Aesthetic Experience<\/strong><\/a><\/li>\n<li style=\"text-align: left;\"><a href=\"#3\"><strong>The GENEA Workshop 2022: The 3<sup style=\"font-size: 10px!important;\">rd<\/sup> Workshop on Generation and Evaluation of Non-verbal Behaviour for Embodied Agents<\/strong><\/a><\/li>\n<li style=\"text-align: left;\"><a href=\"#4\"><strong>2<sup style=\"font-size: 10px!important;\">nd<\/sup> International Workshop on Deep Video Understanding<\/strong><\/a><\/li>\n<li style=\"text-align: left;\"><a href=\"#5\"><strong>MSECP-Wild: The 4<sup style=\"font-size: 10px!important;\">th<\/sup> Workshop on Modeling Socio-Emotional and Cognitive Processes from Multimodal Data In-the-Wild<\/strong><\/a><\/li>\n<li style=\"text-align: left;\"><a href=\"#6\"><strong>3<sup style=\"font-size: 10px!important;\">rd<\/sup> Workshop on Social Affective Multimodal Interaction for Health (SAMIH)<\/strong><\/a><\/li>\n<li style=\"text-align: left;\"><strong><a href=\"#7\">3<sup style=\"font-size: 10px!important;\">rd<\/sup> Workshop on Bridging Social Sciences and AI for Understanding Child Behavior<\/a><\/strong><\/li>\n<\/ol>\n<p>[\/et_pb_text][et_pb_text module_id=&#8221;1&#8243; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_5_text_color=&#8221;#054687&#8243; custom_margin=&#8221;||25px|||&#8221; border_width_bottom=&#8221;1px&#8221; border_color_bottom=&#8221;#a5a5a5&#8243; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<h5 style=\"text-align: justify;\"><strong>Workshop on Multimodal Affect and Aesthetic Experience<\/strong><\/h5>\n<h5 style=\"text-align: justify;\"><strong>Abstract<\/strong><\/h5>\n<p style=\"text-align: justify;\">The 
term \u201caesthetic experience\u201d corresponds to the inner state of a person exposed to the form and content of artistic objects. Quantifying and interpreting the aesthetic experience of people in different contexts can contribute towards (a) creating context and (b) better understanding people\u2019s affective reactions to different aesthetic stimuli. Focusing on different types of artistic content, such as movies, music, literature, urban art, ancient artwork, and modern interactive technology, the goal of this workshop is to enhance the interdisciplinary collaboration among researchers coming from the following domains: affective computing, aesthetics, human-robot\/computer interaction, digital archaeology and art, culture, addictive games.<\/p>\n<h5 style=\"text-align: justify;\"><strong>Website<\/strong><\/h5>\n<p style=\"text-align: justify;\"><span><a href=\"https:\/\/sites.google.com\/view\/maae2022\/home\">https:\/\/sites.google.com\/view\/maae2022\/home<\/a><\/span><\/p>\n<h5 style=\"text-align: justify;\"><strong><\/strong><\/h5>\n<h5 style=\"text-align: justify;\"><strong>Organizers<\/strong><\/h5>\n<ul>\n<li style=\"text-align: justify;\">Theodoros Kostoulas (University of the Aegean, Greece)<\/li>\n<li style=\"text-align: justify;\">Michal Muszynski (IBM Research Europe, Switzerland)<\/li>\n<li style=\"text-align: justify;\">Leimin Tian (Monash University, Australia)<\/li>\n<li style=\"text-align: justify;\">Edgar Roman-Rangel (Instituto Tecnologico Autonomo de M\u00e9xico, Mexico)<\/li>\n<li style=\"text-align: justify;\">Theodora Chaspari (Texas A&amp;M University, USA)<\/li>\n<li style=\"text-align: justify;\">Panos Amelidis (Bournemouth University, UK)<\/li>\n<\/ul>\n<p>[\/et_pb_text][et_pb_text module_id=&#8221;3&#8243; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_5_text_color=&#8221;#054687&#8243; header_5_line_height=&#8221;1.3em&#8221; custom_margin=&#8221;||25px|||&#8221; 
border_width_bottom=&#8221;1px&#8221; border_color_bottom=&#8221;#a5a5a5&#8243; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<h5 style=\"text-align: justify;\"><strong>The GENEA Workshop 2022: The 3<sup style=\"font-size: 10px!important;\">rd<\/sup> Workshop on Generation and Evaluation of Non-verbal Behaviour for Embodied Agents<\/strong><\/h5>\n<h5 style=\"text-align: justify;\"><\/h5>\n<h5 style=\"text-align: justify;\"><b>Workshop summary<\/b><\/h5>\n<p style=\"text-align: justify;\"><span style=\"font-weight: 400;\">Embodied Social Artificial Intelligence, in the form of conversational virtual humans and social robots, is becoming a key aspect of human-machine interaction. For several decades, researchers from fields such as human-computer interaction and robotics have been proposing methods and models to generate non-verbal behaviour for conversational agents in the form of facial expressions, gestures, and gaze. This workshop brings these researchers together to stimulate discussion on how to improve both generation methods and the evaluation of their results, to spark an exchange of ideas, and to open up possible collaborations.<\/span><\/p>\n<h5 style=\"text-align: justify;\"><\/h5>\n<h5 style=\"text-align: justify;\"><b>Workshop page<\/b><\/h5>\n<p style=\"text-align: justify;\"><a href=\"https:\/\/genea-workshop.github.io\/2022\/workshop\"><span style=\"font-weight: 400;\">https:\/\/genea-workshop.github.io\/2022\/workshop<\/span><\/a><\/p>\n<h5 style=\"text-align: justify;\"><\/h5>\n<h5 style=\"text-align: justify;\"><b>Organisers<\/b><\/h5>\n<ul>\n<li style=\"text-align: justify;\"><span style=\"font-weight: 400;\">Pieter Wolfert MSc, IDLab Ghent University &#8211; imec, Ghent, Belgium<\/span><\/li>\n<li style=\"text-align: justify;\"><span style=\"font-weight: 400;\">Dr. 
Taras Kucherenko, SEED &#8211; Electronic Arts, Stockholm Sweden<\/span><\/li>\n<li style=\"text-align: justify;\"><span style=\"font-weight: 400;\">Dr. Youngwoo Yoon, ETRI, South Korea<\/span><\/li>\n<li style=\"text-align: justify;\"><span style=\"font-weight: 400;\">Dr. Zerrin Yumak, Utrecht University, Utrecht, The Netherlands<\/span><\/li>\n<li style=\"text-align: justify;\"><span style=\"font-weight: 400;\">Dr. Gustav Eje Henter, KTH Royal Institute of Technology, Stockholm, Sweden<\/span><\/li>\n<li style=\"text-align: justify;\"><span style=\"font-weight: 400;\">Carla Viegas, CMU, USA<\/span><\/li>\n<\/ul>\n<p>[\/et_pb_text][et_pb_text module_id=&#8221;4&#8243; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_5_text_color=&#8221;#054687&#8243; header_5_line_height=&#8221;1.3em&#8221; custom_margin=&#8221;||25px|||&#8221; border_width_bottom=&#8221;1px&#8221; border_color_bottom=&#8221;#a5a5a5&#8243; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<h5 style=\"text-align: justify;\"><strong>2<sup style=\"font-size: 10px!important;\">nd<\/sup> International Workshop on Deep Video Understanding<\/strong><\/h5>\n<h5 style=\"text-align: justify;\"><strong><\/strong><\/h5>\n<h5 style=\"text-align: justify;\"><strong>Workshop Summary<\/strong><\/h5>\n<p style=\"text-align: justify;\">Deep video understanding is a difficult task which requires systems to develop a deep analysis and understanding of the relationships between different entities in video, to use known information to reason about other, more hidden information, and to populate a knowledge graph (KG) with all acquired information. To work on this task, a system should take into consideration all available modalities (speech, image\/video, and in some cases text). 
The aim of this workshop is to push the limits of multimodal extraction, fusion, and analysis techniques to address the problem of analysing long-duration videos holistically and extracting useful knowledge that can be applied to answering different types of queries. The target knowledge includes both visual and non-visual elements. As video and multimedia data become increasingly popular across different domains, the research, approaches, and techniques targeted by this workshop will only grow in relevance in the coming years.<\/p>\n<h5 style=\"text-align: justify;\"><strong>Workshop Page<\/strong><\/h5>\n<p style=\"text-align: justify;\"><a href=\"https:\/\/sites.google.com\/view\/dvu2022-workshop\" target=\"_blank\" rel=\"noopener\">https:\/\/sites.google.com\/view\/dvu2022-workshop<\/a><\/p>\n<h5 style=\"text-align: justify;\"><strong><\/strong><\/h5>\n<h5 style=\"text-align: justify;\"><strong>Workshop Organizers<\/strong><\/h5>\n<ul>\n<li style=\"text-align: justify;\">Keith Curtis, National Institute of Standards and Technology, USA<\/li>\n<li style=\"text-align: justify;\">George Awad, National Institute of Standards and Technology, USA<\/li>\n<li style=\"text-align: justify;\">Shahzad Rajput, Georgetown University &amp; National Institute of Standards and Technology, USA<\/li>\n<\/ul>\n<p>[\/et_pb_text][et_pb_text module_id=&#8221;5&#8243; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_5_text_color=&#8221;#054687&#8243; header_5_line_height=&#8221;1.3em&#8221; custom_margin=&#8221;||25px|||&#8221; border_width_bottom=&#8221;1px&#8221; border_color_bottom=&#8221;#a5a5a5&#8243; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<h5 style=\"text-align: justify;\"><strong>MSECP-Wild: The 4<sup style=\"font-size: 10px!important;\">th<\/sup> Workshop on Modeling Socio-Emotional and Cognitive Processes from Multimodal Data 
In-the-Wild<\/strong><\/h5>\n<h5 style=\"text-align: justify;\"><strong><\/strong><\/h5>\n<h5 style=\"text-align: justify;\"><strong>Workshop Summary<\/strong><\/h5>\n<p style=\"text-align: justify;\">The ability to automatically infer relevant aspects of human users&#8217; thoughts and feelings is crucial for technologies to adapt their behaviors in complex interactions intelligently (e.g., for user models in social robots or tutoring systems). Research on multimodal analysis of behavioral and physiological data has demonstrated the potential for estimates of a broad range of internal states and processes, such as a person&#8217;s mood or attentional engagement. However, despite progress, constructing robust enough models for deployment in real-world applications remains an open problem. The MSECP-Wild workshop serves as a multidisciplinary forum to present and discuss research on addressing this challenge. It is a concerted effort to stimulate joint research projects, exchange methods and critically discuss current and future investigations. We welcome the presentation of evaluation studies, theoretical considerations, data corpora, and novel modeling approaches. In this iteration, the workshop focuses particularly on addressing variations in contextual conditions as a challenge for accurate predictions of internal states and processes, notably within social settings (e.g., conversations in a group). Submissions specifically relating to this topic will be given priority for presentation. 
Similarly, we encourage all submissions to reflect on context-related challenges\/limitations in their work.<\/p>\n<h5 style=\"text-align: justify;\"><strong><\/strong><\/h5>\n<h5 style=\"text-align: justify;\"><strong>Workshop page<\/strong><\/h5>\n<p style=\"text-align: justify;\"><a href=\"http:\/\/msecp-wild.ewi.tudelft.nl\/\" target=\"_blank\" rel=\"noopener\">http:\/\/msecp-wild.ewi.tudelft.nl\/<\/a><\/p>\n<h5 style=\"text-align: justify;\"><strong><\/strong><\/h5>\n<h5 style=\"text-align: justify;\"><strong>Workshop Organizers<\/strong><\/h5>\n<ul>\n<li style=\"text-align: justify;\">Bernd Dudzik (Delft University of Technology)<\/li>\n<li style=\"text-align: justify;\">Dennis K\u00fcster (University of Bremen)<\/li>\n<li style=\"text-align: justify;\">David St-Onge (\u00c9cole de Technologie Sup\u00e9rieure)<\/li>\n<li style=\"text-align: justify;\">Felix Putze (University of Bremen)<\/li>\n<\/ul>\n<p>[\/et_pb_text][et_pb_text module_id=&#8221;6&#8243; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_5_text_color=&#8221;#054687&#8243; header_5_line_height=&#8221;1.3em&#8221; custom_margin=&#8221;||25px|||&#8221; border_width_bottom=&#8221;1px&#8221; border_color_bottom=&#8221;#a5a5a5&#8243; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<h5><strong>3<sup style=\"font-size: 10px!important;\">rd<\/sup> Workshop on Social Affective Multimodal Interaction for Health (SAMIH)<\/strong><br aria-hidden=\"true\" \/><br aria-hidden=\"true\" \/><strong>Workshop Summary<\/strong><\/h5>\n<div>This workshop invites work describing how interactive, multimodal technology such as virtual agents can be used in social skills training for measuring and training social-affective interactions. Sensing technology now enables analyzing users\u2019 behaviors and physiological signals (heart rate, EEG, etc.). 
Various signal processing and machine learning methods can be used for such prediction tasks. Beyond sensing, it is also important to analyze human behaviors and to model and implement training methods (e.g., via virtual agents, social robots, and relevant scenarios, with appropriate and personalized feedback on social skills performance). Such social signal processing methods and tools can be applied to measure and reduce social stress in everyday situations, including public speaking at schools and workplaces. Target populations include individuals with depression, Social Anxiety Disorder (SAD), schizophrenia, and Autism Spectrum Disorder (ASD), but also a much larger group affected by other social-pathological phenomena.<br aria-hidden=\"true\" \/><br aria-hidden=\"true\" \/><\/div>\n<h5><strong>Workshop Page<\/strong><\/h5>\n<p><a href=\"https:\/\/ahcweb01.naist.jp\/samih2022\/index.html\" target=\"_blank\" rel=\"noopener noreferrer\" data-auth=\"NotApplicable\" shape=\"rect\" data-linkindex=\"0\">https:\/\/ahcweb01.naist.jp\/samih2022\/index.html<\/a><br aria-hidden=\"true\" \/><br aria-hidden=\"true\" \/><strong><\/strong><\/p>\n<h5><strong>Workshop Organizers<\/strong><\/h5>\n<ul>\n<li>Hiroki Tanaka (Nara Institute of Science and Technology, Japan)<\/li>\n<li>Satoshi Nakamura (Nara Institute of Science and Technology, Japan)<\/li>\n<li>Kazuhiro Shidara (Nara Institute of Science and Technology, Japan)<\/li>\n<li>Jean-Claude Martin (CNRS-LISN, Universit\u00e9 Paris Saclay, France)<\/li>\n<li>Catherine Pelachaud (CNRS-ISIR, Sorbonne University, France)<\/li>\n<\/ul>\n<p>[\/et_pb_text][et_pb_text module_id=&#8221;7&#8243; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_5_text_color=&#8221;#054687&#8243; header_5_line_height=&#8221;1.3em&#8221; custom_margin=&#8221;||25px|||&#8221; border_width_bottom=&#8221;1px&#8221; border_color_bottom=&#8221;#a5a5a5&#8243; global_colors_info=&#8221;{}&#8221; 
theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<h5><strong>3<sup style=\"font-size: 10px!important;\">rd<\/sup>\u00a0Workshop on Bridging Social Sciences and AI for Understanding Child Behavior<br \/><\/strong><br aria-hidden=\"true\" \/><strong>Workshop Summary<\/strong><\/h5>\n<div>Child behaviour is a topic of wide scientific interest across many disciplines, including the social and behavioural sciences and artificial intelligence (AI). Yet, knowledge from these different disciplines is not integrated to its full potential, owing, among other factors, to the dissemination of knowledge in different outlets (journals, conferences) and to differing research practices. In this workshop, we aim to connect these fields and fill the gaps between science and technology capabilities to address topics such as: using AI (e.g. audio, visual, and textual signal processing and machine learning) to better understand and model child behavioural and developmental processes, challenges and opportunities in large-scale child behaviour analysis, and implementing explainable ML\/AI on sensitive child data. 
We also welcome contributions on new child-behaviour related multimodal corpora and preliminary experiments on them.<br aria-hidden=\"true\" \/><br aria-hidden=\"true\" \/><\/div>\n<h5><strong>Workshop Page<\/strong><\/h5>\n<p><a href=\"https:\/\/sites.google.com\/view\/wocbu\/\">https:\/\/sites.google.com\/view\/wocbu\/<\/a><\/p>\n<h5><strong><\/strong><\/h5>\n<h5><strong><\/strong><\/h5>\n<h5><strong>Workshop Organizers<\/strong><\/h5>\n<ul>\n<li>Heysem Kaya, Utrecht University, the Netherlands<\/li>\n<li>Anika van der Klis, Utrecht University, the Netherlands<\/li>\n<li>Maryam Najafian, MIT, United States<\/li>\n<li>Saeid Safavi, University of Surrey, United Kingdom<\/li>\n<\/ul>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][\/et_pb_section]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Workshops Workshop on Multimodal Affect and Aesthetic Experience The GENEA Workshop 2022: The 3rd Workshop on Generation and Evaluation of Non-verbal Behaviour for Embodied Agents 2nd International Workshop on Deep Video Understanding MSECP-Wild: The 4th Workshop on Modeling Socio-Emotional and Cognitive Processes from Multimodal Data In-the-Wild 3rd Workshop on Social Affective Multimodal Interaction for Health [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"open","template":"","meta":{"_et_pb_use_builder":"on","_et_pb_old_content":"<!-- wp:paragraph -->\n<p>This is an example page. It's different from a blog post because it will stay in one place and will show up in your site navigation (in most themes). Most people start with an About page that introduces them to potential site visitors. It might say something like this:<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:quote -->\n<blockquote class=\"wp-block-quote\"><p>Hi there! I'm a bike messenger by day, aspiring actor by night, and this is my website. I live in Los Angeles, have a great dog named Jack, and I like pi\u00f1a coladas. 
(And gettin' caught in the rain.)<\/p><\/blockquote>\n<!-- \/wp:quote -->\n\n<!-- wp:paragraph -->\n<p>...or something like this:<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:quote -->\n<blockquote class=\"wp-block-quote\"><p>The XYZ Doohickey Company was founded in 1971, and has been providing quality doohickeys to the public ever since. Located in Gotham City, XYZ employs over 2,000 people and does all kinds of awesome things for the Gotham community.<\/p><\/blockquote>\n<!-- \/wp:quote -->\n\n<!-- wp:paragraph -->\n<p>As a new WordPress user, you should go to <a href=\"https:\/\/icmi.acm.org\/2022\/wp-admin\/\">your dashboard<\/a> to delete this page and create new pages for your content. Have fun!<\/p>\n<!-- \/wp:paragraph -->","_et_gb_content_width":"","inline_featured_image":false,"footnotes":""},"class_list":["post-368","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/icmi.acm.org\/2022\/wp-json\/wp\/v2\/pages\/368","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/icmi.acm.org\/2022\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/icmi.acm.org\/2022\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/icmi.acm.org\/2022\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/icmi.acm.org\/2022\/wp-json\/wp\/v2\/comments?post=368"}],"version-history":[{"count":19,"href":"https:\/\/icmi.acm.org\/2022\/wp-json\/wp\/v2\/pages\/368\/revisions"}],"predecessor-version":[{"id":645,"href":"https:\/\/icmi.acm.org\/2022\/wp-json\/wp\/v2\/pages\/368\/revisions\/645"}],"wp:attachment":[{"href":"https:\/\/icmi.acm.org\/2022\/wp-json\/wp\/v2\/media?parent=368"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}