{"id":740,"date":"2022-09-19T11:49:29","date_gmt":"2022-09-19T06:19:29","guid":{"rendered":"https:\/\/icmi.acm.org\/2022\/?page_id=740"},"modified":"2023-10-10T13:33:38","modified_gmt":"2023-10-10T08:03:38","slug":"program","status":"publish","type":"page","link":"https:\/\/icmi.acm.org\/2023\/program\/","title":{"rendered":"Program"},"content":{"rendered":"<p>[et_pb_section fb_built=&#8221;1&#8243; admin_label=&#8221;section&#8221; _builder_version=&#8221;4.14.4&#8243; background_enable_image=&#8221;off&#8221; custom_padding=&#8221;3px||5px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;][et_pb_row admin_label=&#8221;row&#8221; _builder_version=&#8221;4.14.4&#8243; background_size=&#8221;initial&#8221; background_position=&#8221;top_left&#8221; background_repeat=&#8221;repeat&#8221; width=&#8221;90%&#8221; custom_padding=&#8221;4px||6px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;3.25&#8243; custom_padding=&#8221;|||&#8221; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221; theme_builder_area=&#8221;post_content&#8221;][et_pb_text _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; text_font=&#8221;||||||||&#8221; text_text_color=&#8221;#000000&#8243; text_font_size=&#8221;13px&#8221; header_4_text_color=&#8221;#072f93&#8243; header_5_text_color=&#8221;#085593&#8243; text_orientation=&#8221;justified&#8221; custom_margin=&#8221;||-10px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<h3><strong>ICMI 2023 Conference Program<\/strong><\/h3>\n<div>\n<div dir=\"ltr\">\n<div id=\"x_x_m_3088732876422127253gmail-:mg\" aria-label=\"Corpo da Mensagem\" role=\"textbox\">\n<p><em><strong>Please note that some changes can still happen due to unforeseen circumstances.<\/strong><\/em><\/p>\n<h4><strong><\/strong><\/h4>\n<h4><strong>Program at a glance<\/strong><strong><\/strong><\/h4>\n<h5><strong>Workshops and Tutorials<\/strong><\/h5>\n<p>Each event will start at 9:00 at the earliest and will end at 18:00 at the latest. 
The detailed schedule for each event can be found on their respective websites.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/icmi.acm.org\/2023\/wp-content\/uploads\/2023\/10\/Icmi_Monday_v2.png\" width=\"555\" height=\"1125\" alt=\"\" class=\"wp-image-1569 alignnone size-full\" \/><\/p>\n<p><strong><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/icmi.acm.org\/2023\/wp-content\/uploads\/2023\/10\/ICMI_Friday_v2.png\" width=\"555\" height=\"1037\" alt=\"\" class=\"wp-image-1571 alignnone size-full\" \/><\/strong><\/p>\n<h5><strong>Main Conference<\/strong><\/h5>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<p>[\/et_pb_text][et_pb_image src=&#8221;https:\/\/icmi.acm.org\/2023\/wp-content\/uploads\/2023\/09\/ProgramGlance-1-1.png&#8221; title_text=&#8221;ProgramGlance-1&#8243; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;][\/et_pb_image][et_pb_text _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; text_font=&#8221;||||||||&#8221; text_text_color=&#8221;#000000&#8243; text_font_size=&#8221;13px&#8221; header_4_text_color=&#8221;#072f93&#8243; header_5_text_color=&#8221;#013e84&#8243; text_orientation=&#8221;justified&#8221; min_height=&#8221;46.4px&#8221; custom_margin=&#8221;10px||-15px||false|false&#8221; custom_padding=&#8221;||7px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<h4><strong>Detailed Program<\/strong><\/h4>\n<p>[\/et_pb_text][et_pb_text _builder_version=&#8221;4.14.4&#8243; _dynamic_attributes=&#8221;link_option_url&#8221; _module_preset=&#8221;default&#8221; text_font=&#8221;||||||||&#8221; text_text_color=&#8221;#000000&#8243; text_font_size=&#8221;13px&#8221; header_4_text_color=&#8221;#072f93&#8243; header_5_text_color=&#8221;#013e84&#8243; text_orientation=&#8221;justified&#8221; min_height=&#8221;46.4px&#8221; custom_margin=&#8221;10px||-15px||false|false&#8221; custom_padding=&#8221;||7px|||&#8221; link_option_url=&#8221;@ET-DC@eyJkeW5hbWljIjp0cnVlLCJjb250ZW50IjoicG9zdF9saW5rX3VybF9wYWdlIiwic2V0dGluZ3MiOnsicG9zdF9pZCI6IjkyNSJ9fQ==@&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<h5><strong>Doctoral Consortium (<\/strong><strong>Monday, 09 October 2023)<\/strong><\/h5>\n<p>[\/et_pb_text][et_pb_text module_class=&#8221;open-toggle&#8221; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; text_font=&#8221;||||||||&#8221; text_text_color=&#8221;#000000&#8243; text_font_size=&#8221;13px&#8221; header_4_text_color=&#8221;#072f93&#8243; header_5_text_color=&#8221;#013e84&#8243; text_orientation=&#8221;justified&#8221; min_height=&#8221;29px&#8221; custom_margin=&#8221;10px||-15px||false|false&#8221; link_option_url=&#8221;#tuesday10&#8243; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<h5><strong>Tuesday, 10 October 2023<\/strong><\/h5>\n<p>[\/et_pb_text][et_pb_text module_class=&#8221;open-toggle&#8221; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; text_font=&#8221;||||||||&#8221; text_text_color=&#8221;#000000&#8243; text_font_size=&#8221;13px&#8221; header_4_text_color=&#8221;#072f93&#8243; header_5_text_color=&#8221;#013e84&#8243; text_orientation=&#8221;justified&#8221; min_height=&#8221;46.4px&#8221; custom_margin=&#8221;20px||-15px||false|false&#8221; link_option_url=&#8221;#wednesday11&#8243; 
- Wednesday, 11 October 2023
- Thursday, 12 October 2023
- Papers not presented in-person

Tuesday, 10 October

All sessions will take place in the Auditorium, Sorbonne University International Conference Centre, except for the Poster Session, which will be in the Foyer of the Auditorium.

09:00-09:15  Welcome
             ICMI 2023 General Chairs
09:15-10:15  Keynote 1: Multimodal information processing in communication: the nature of faces and voices
             Prof. Sophie Scott (Session Chair: Louis-Philippe Morency)
10:15-10:45  Break
10:45-12:05  Oral Session 1: Social and Physiological Signals (Session Chair: Zakia Hammal)
  10:45-11:05  EEG-based Cognitive Load Classification using Feature Masked Autoencoding and Emotion Transfer Learning
               D. Pulver, P. Angka, P. Hungler and A. Etemad
  11:05-11:25  Representation Learning for Interpersonal and Multimodal Behavior Dynamics: A Multiview Extension of Latent Change Score Models
               A. Vail, J. M. Girard, L. Bylsma, J. Fournier, H. Swartz, J. Cohn and L.-P. Morency
  11:25-11:45  Crucial Clues: Investigating Psychophysiological Behaviors for Measuring Trust in Human-Robot Interaction
               M. Ahmad and A. Alzahrani
  11:45-12:05  Understanding the Social Context of Eating with Multimodal Smartphone Sensing: The Role of Country Diversity
               N. D. Kammoun, L. Meegahapola and D. Gatica-Perez
12:05-14:00  Lunch
14:00-15:20  Oral Session 2: Bias and Diversity (Session Chair: Chloé Clavel)
  14:00-14:20  Using Explainability for Bias Mitigation: A Case Study for Fair Recruitment Assessment
               G. Sogancioglu, H. Kaya and A. A. Salah
  14:20-14:40  Multimodal Bias: Assessing Gender Bias in Computer Vision Models with NLP Techniques
               A. Mandal, S. Little and S. Leavy
  14:40-15:00  Recognizing Intent in Collaborative Manipulation
               Z. Rysbek, K.-H. Oh and M. Zefran
  15:00-15:20  Evaluating Outside the Box: Lessons Learned on eXtended Reality Multi-modal Experiments Beyond the Laboratory
               B. Marques, S. Silva, R. Maio, J. Alves, C. Ferreira, P. Dias and B. Sousa Santos
15:20-15:50  Break
15:20-17:20  Poster Session 1 (including Doctoral Consortium posters) (Session Chair: TBA)
  - Analyzing and Recognizing Interlocutors' Gaze Functions from Multimodal Nonverbal Cues
    A. Tashiro, M. Imamura, S. Kumano and K. Otsuka
  - Multimodal Fusion Interactions: A Study of Human and Automatic Quantification
    P. P. Liang, Y. Cheng, R. Salakhutdinov and L.-P. Morency
style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">HIINT: Historical, Intra- and Inter- personal Dynamics Modeling with Cross-person Memory Transformer<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">Y.Kim, D.W.Lee, P.P.Liang, S.Alghowinem, C.Breazeal and H.W.Park<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Deciphering Entrepreneurial Pitches: A Multimodal Deep Learning Approach to Predict Probability of Investment<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">P.van Aken, M.M.Jung, W.Liebregts and I.O.Ertugrul<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Identifying Interlocutors&#8217; Behaviors and its Timings Involved with Impression Formation from Head-Movement Features and Linguistic Features<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">S.Otsuchi, K.Ito, Y.Ishii, R.Ishii, S.Eitoku and K.Otsuka<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Evaluating the Potential of Caption Activation to Mitigate Confusion Inferred from Facial Gestures in Virtual Meetings<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">M.Heck, J.Jeong and C.Becker<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Towards Autonomous Physiological Signal Extraction From Thermal Videos Using Deep Learning<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">K.Das, M.Abouelenien, M.G.Burzo, J.Elson, K.Prakah-Asante and C.Maranville<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Exploring Feedback Modality Designs to Improve Young Children&#8217;s Collaborative Actions<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">A.Melniczuk and 
E.Vrapi<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Breathing New Life into COPD Assessment: Multisensory Home-monitoring for Predicting Severity<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">Z.Xiao, M.Muszynski, R.Marcinkevi\u010ds, L.Zimmerli, A.D.Ivankay, D.Kohlbrenner, M.Kuhn, Y.Nordmann, U.Muehlner, C.Clarenbach,J.E.Vogt and T.Brunschwiler<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Analyzing Synergetic Functional Spectrum from Head Movements and Facial Expressions in Conversations<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">M.Imamura, A.Tashiro, S.Kumano and K.Otsuka<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Do I Have Your Attention: A Large Scale Engagement Prediction Dataset and Baselines<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">M.Singh, X.Hoque, D.Zeng, Y.Wang, K.Ikeda and A.Dhall<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Implicit Search Intent Recognition using EEG and Eye Tracking: Novel Dataset and Cross-User Prediction<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">M.Sharma, S.Chen, P.M\u00fcller, M.Rekrut and A.Kr\u00fcger<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Multimodal Analysis and Assessment of Therapist Empathy in Motivational Interviews<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">T.Tran, Y.Yin, L.Tavabi, J.Delacruz, B.Borsari, J.D..Woolley, S.Scherer and M.Soleymani<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Multimodal Turn Analysis and 
Prediction for Multi-party Conversations<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">M-C.Lee, M.Trinh and Z.Deng<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Explainable Depression Detection via Head Motion Patterns<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">M.Gahalawat, R.Fernandez Rojas, T.Guha, R.Subramanian, R.Goecke<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Early Classifying Multimodal Sequences<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">A.Cao, J.Utke and D.Klabjan<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Predicting Player Engagement in Tom Clancy&#8217;s The Division 2: A Multimodal Approach via Pixels and Gamepad Actions<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">K.Pinitas, D.Renaudie, M.Thomsen, M.Barthet, K.Makantasis, A.Liapis and G.Yannakakis<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">On Head Motion for Recognizing Aggression and Negative Affect during Speaking and Listening<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">S.Fitrianie and I.Lefter<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">SHAP-based Prediction of Mother&#8217;s History of Depression to Understand the Influence on Child Behavior<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">M.Bilalpur, S.Hinduja, L.Cariola, L.Sheeber, N.Allen, L-P. Morency, and. 
J.Cohn<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Computational analyses of linguistic features with schizophrenic and autistic traits along with formal thought disorders<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">T.Saga, H.Tanaka and S.Nakamura<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Acoustic and Visual Knowledge Distillation for Contrastive Audio-Visual Localization<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">E.Yaghoubi, A.P.Kelm, T.Gerkmann and S.Frintrop<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Performance Exploration of RNN Variants for Recognizing Daily Life Stress Levels by Using Multimodal Physiological Signals<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">Y.Said Ca and, E.Andr\u00e9<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Enhancing Resilience to Missing Data in Audio-Text Emotion Recognition with Multi-Scale Chunk Regularization<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">W-C.Lin, L.Goncalves and C.Busso<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Interpreting Sign Language Recognition using Transformers and MediaPipe Landmarks<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">C.Luna-Jim\u00e9nez, M.Gil-Mart\u00edn, R.Kleinlein, R.San-Segundo and F.Fern\u00e1ndez-Mart\u00ednez<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Expanding the Role of Affective Phenomena in Multimodal Interaction Research<span style=\"font-weight: 400;\"><br 
    L. Mathur, M. Mataric and L.-P. Morency
15:20-17:20  Doctoral Consortium posters (Session Chair: TBA)
  - Smart Garments for Immersive Home Rehabilitation Using VR
    L. A. Magre
  - Crowd Behavior Prediction Using Visual and Location Data in Super-Crowded Scenarios
    A. B. M. Wijaya
  - Recording Multimodal Pair-Programming Dialogue for Reference Resolution by Conversational Agents
    C. Domingo
  - Modeling Social Cognition and Its Neurologic Deficits with Artificial Neural Networks
    L. P. Mertens
  - Come Fl.. Run with me: Understanding the Utilization of Drones to Support Recreational Runner's Well Being
    A. Balasubramaniam
  - Conversational Grounding in Multimodal Dialog Systems
    B. Mohapatra
  - Explainable Depression Detection using Multimodal Behavioural Cues
    M. Gahalawat
  - Enhancing Surgical Team Collaboration and Situation Awareness Through Multimodal Sensing
    A. Allemang-Trivalle
  - Bridging Multimedia Modalities: Enhanced Multimodal AI Understanding and Intelligent Agents
    S. Gautam

Wednesday, 11 October
All sessions will take place in the Auditorium, Sorbonne University International Conference Centre, except for the Poster Session, whose room is TBA, and the Demo Session, which will be in the Foyer of the Auditorium.

09:15-10:15  Keynote 2: A Robot Just for You: Multimodal Personalized Human-Robot Interaction and the Future of Work and Care
             Prof. Maja Mataric (Session Chair: Tanja Schultz)
10:15-10:45  Break
10:45-12:05  Oral Session 3: Affective Computing (Session Chair: Dirk Heylen)
  10:45-11:05  Neural Mixed Effects for Nonlinear Personalized Predictions
               T. Wörtwein, N. Allen, L. Sheeber, R. Auerbach, J. Cohn and L.-P. Morency
  11:05-11:25  Detecting When the Mind Wanders Off Task in Real-time: An Overview and Systematic Review
               V. Kuvar, J. W. Y. Kam, S. Hutt and C. Mills
  11:25-11:45  Annotations from speech and heart rate: impact on multimodal emotion recognition
               K. Sharma and G. Chanel
  11:45-12:05  Toward Fair Facial Expression Recognition with Improved Distribution Alignment
               M. Kolahdouzi and A. Etemad
12:05-14:00  Lunch
14:00-15:20  Oral Session 4: Multimodal Interfaces (Session Chair: Sean Andrist)
  14:00-14:20  Ether-Mark: An Off-Screen Marking Menu For Mobile Devices
               H. Rateau, Y. Rekik and E. Lank
  14:20-14:40  Embracing Contact: Detecting Parent-Infant Interactions
               M. Doyran, R. Poppe and A. Ali Salah
#000000;\">Cross-Device Shortcuts: An Interaction Technique that Creates Deep Links between Apps Across Devices for Content Transfer<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">M.Beyeler, Y.F.Cheng and C.Holz<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\">15:00-15:20\u00a0\u00a0<\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Component attention network for multimodal dance improvisation recognition<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">J. Fu, J. Tan, W. Yin, S. Pashami, and M. Bj\u00f6rkman<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; vertical-align: top; border-style: none;\">15:20-15:40<\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #333399;\"><b>Challenge Overview Talks<\/b><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\"><\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; vertical-align: top; border-style: none;\">15:40-16:10<\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #333399;\"><span style=\"color: #339966;\"><b>Break<\/b><\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><em>Overlapping with the poster session<\/em><i><span style=\"font-weight: 400;\"><\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; vertical-align: top; border-style: none;\">15:40-17:40<\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #333399;\"><b>Poster Session 2 (and Demo Session)<\/b><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">Session Chair: TBA<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">TongueTap: Multimodal Tongue Gesture Recognition with Head-Worn Devices<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">T.Gemicioglu, R.Michael Winters, Y-T.Wang,T.Gable, I.J.Tashev<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Using Augmented Reality to Assess the Role of Intuitive Physics in the Water-Level Task<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">R.Abadi, LM.Wilcox and 
R.Allison<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Classification of Alzheimer&#8217;s Disease with Deep Learning on Eye-tracking Data<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">H.Sriram, C.Conati and T.Field<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Video-based Respiratory Waveform Estimation in Dialogue: A Novel Task and Dataset for Human-Machine Interaction<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">T.Obi and K.Funakoshi<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">The Role of Audiovisual Feedback Delays and Bimodal Congruency for Visuomotor Performance in Human-Machine Interaction<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">A.Dix,C.Sabrina and A.M.Harkin<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Can empathy affect the attribution of mental states to robots?<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">C.Gena, F.Manini, A.Lieto, A.Lillo and F.Vernero<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">AIUnet: Asymptotic inference with U2-Net for referring image segmentation<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">M.Heck, J.Jeong and C.Becker<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Using Speech Patterns to Model the Dimensions of Teamness in Human-Agent Teams<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">E.Doherty, C.Spencer, L.Eloy, N.R.Dickler and 
L.Hirshfield<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Robot Duck Debugging: Can Attentive Listening Improve Problem Solving?<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">M.T.Parreira, S.Gillet and I.Leite<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Estimation of Violin Bow Pressure Using Photo-Reflective Sensors<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">Y.Mizuho and R.Kitamura and Y.Sugiurar<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Paying Attention to Wildfire: Using U-Net with Attention Blocks on Multimodal Data for Next Day Prediction<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">J.Fitzgerald,E.Seefried, J.E.Yost, S.Pallickara and N.Blanchard<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">ReNeLiB: Real-time Neural Listening Behavior Generation for Socially Interactive Agents<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">D.S.Withanage Don, P.M\u00fcller, F.Nunnari, E.Andr\u00e9 and P.Gebhard<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Large language models in textual analysis for gesture selection<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">L.Birka, N.Yongsatianchot, P.G.Torshizi, E.Minucci and S.Marsella<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Increasing Heart Rate and Anxiety Level with Vibrotactile and Audio Presentation of Fast Heartbeat<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 
400;\">R.Wang, H.Zhang, S.A.Macdonald, P.Di Campli San Vito<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">User Feedback-based Online Learning for Intent Classification<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">K.G\u00f6n\u00e7, B.Sa\u011flam, O.Dalmaz, T.\u00c7ukur, S.Kozat and H.Dibeklioglu<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">\u00b5GeT: Multimodal eyes-free text selection technique combining touch interaction and microgestures<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">G.R.J.Faisandaz, A.Goguey, C.Jouffrais and L.Nigay<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Deep Breathing Phase Classification with a Social Robot for Mental Health<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">K.Matheus, E.Mamantov, M.V\u00e1zquez and B.Scassellati<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">ASMRcade: Interactive Audio Triggers for an Autonomous Sensory Meridian Response<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">S.Mertes, M.Strobl, R.Schlagowski and E. 
Andr\u00e9<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Augmented Immersive Viewing and Listening Experience Based on Arbitrarily Angled Interactive Audiovisual Representation<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">T.Horiuchi, S.Okuba and T.Kobayashi<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; text-align: right; vertical-align: top; border-style: none;\"><span style=\"color: #808080;\"><\/span><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Out of Sight, &#8230; How Asymmetry in Video-Conference Affects Social Interaction<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">C.Sallaberry, G.Englebienne, J.Van Erp and V.Evers<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; vertical-align: top; border-style: none;\"><\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #333399;\"><b>Demo Session<\/b><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">Session Chair: TBA<\/span><\/i><\/span><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;[\/et_pb_text][et_pb_text module_id=&#8221;thursday12&#8243; module_class=&#8221;open-toggle&#8221; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; text_font=&#8221;||||||||&#8221; text_text_color=&#8221;#000000&#8243; text_font_size=&#8221;13px&#8221; header_4_text_color=&#8221;#072f93&#8243; header_5_text_color=&#8221;#013e84&#8243; text_orientation=&#8221;justified&#8221; min_height=&#8221;2664.3px&#8221; custom_margin=&#8221;-16px||-15px||false|false&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<h5><strong>Thursday, 12 October<\/strong><\/h5>\n<p><i><span style=\"font-weight: 400;\">All sessions will take place in the <\/span><\/i><b><i>Auditorium, Sorbonne University International Conference Centre <\/i><\/b><i><span style=\"font-weight: 400;\">except for the Poster Session that will be in the <\/span><\/i><b><i>Foyer of the Auditorium, Sorbonne University International Conference Centre<\/i><\/b><\/p>\n<table border=\"1\" style=\"width: 100%; border-collapse: collapse; border-style: none; float: left; height: 58px;\">\n<tbody>\n<tr style=\"height: 48px;\">\n<td style=\"width: 19.593%; height: 48px; vertical-align: top; border-style: none;\">09:15-10:15<\/td>\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #333399;\"><span style=\"font-weight: 400;\"><b>Keynote 3: Projecting Life Onto Machines<\/b><br \/><\/span><\/span><span style=\"color: #3366ff;\"><strong><i>Prof. 
09:15-10:15  Keynote 3: Projecting Life Onto Machines (Prof. Simone Natale). Session Chair: Alessandro Vinciarelli
10:15-10:45  Break
10:45-12:05  Oral Session 5: Gestures and Social Interactions. Session Chair: Mohammad Soleymani
  10:45-11:05  AQ-GT: a Temporally Aligned and Quantized GRU-Transformer for Co-Speech Gesture Synthesis (H.Voß and S.Kopp)
  11:05-11:25  Frame-Level Event Representation Learning for Semantic-Level Generation and Editing of Avatar Motion (A.Ideno, T.Kaneko and T.Harada)
  11:25-11:45  FaceXHuBERT: Text-less Speech-driven E(X)pressive 3D Facial Animation Synthesis Using Self-Supervised Speech Representation Learning (K.I.Haque and Z.Yumak)
  11:45-12:05  Influence of hand representation on a grasping task in augmented reality (L.Lafuma, G.Bouyer, O.Goguel and J.-Y.P.Didier)
12:05-14:00  Lunch
14:00-15:00  Keynote 4 – Sustained Achievement Award (Prof. Louis-Philippe Morency). Session Chair: TBA
15:00-15:30  Break (overlapping with Poster Session 3)
15:00-16:45  Poster Session 3 and Late Breaking Results. Session Chair: TBA
- Synerg-eye-zing: Decoding Nonlinear Gaze Dynamics Driving Successful Collaborations in Co-located Teams (G.S.Rajshekar, L.Eloy, R.Dickler, J.G.Reitman, S.L.Pugh, P.Foltz, J.C.Gorman, J.Harrison and L.Hirshfield)
- Exploring Neurophysiological Responses to Cross-Cultural Deepfake Videos (M.R.Khan, S.Naeem, U.Tariq, A.Dhall, M.N.A.Khan, F.Al Shargie and H.Al Nashash)
- Characterization of collaboration in a virtual environment with gaze and speech signals (A.Léchappé, A.Milliat, C.Fleury, M.Chollet and C.Dumas)
- HEARD-LE: An Intelligent Conversational Interface for Wordle (C.Yang, K.Arredondo, J.I.Koh, P.Taele and T.Hammond)
- Assessing Infant and Toddler Behaviors through Wearable Inertial Sensors: A Preliminary Investigation (A.Onodera, R.Ishioka, Y.Nishiyama and K.Sezaki)
- ASAR Dataset and Computational Model for Affective State Recognition During ARAT Assessment for Upper Extremity Stroke Survivors (T.Ahmed, T.Rikakis, A.Kelliher and M.Soleymani)
- The Limitations of Current Similarity-Based Objective Metrics In the Context of Human-Agent Interaction Applications (A.Deffrennes, L.Vincent, M.Pivette, K.El Haddad, J.D.Bailey, M.Perusquia-Hernandez, S.M.Alarcão and T.Dutoit)
- Do Body Expressions Leave Good Impressions? – Predicting Investment Decisions based on Pitcher's Body Expressions (M.M.Jung, M.van Vlierden, W.Liebregts and I.Onal Ertugrul)
- Multimodal Entrainment in Bio-Responsive Multi-User VR Interactives (M.Song and S.Di Paola)
- Multimodal Synchronization in Musical Ensembles: Investigating Audio and Visual Cues (S.Chakraborty and J.Timoney)
- Insights Into the Importance of Linguistic Textual Features on the Persuasiveness of Public Speaking (A.Barkar, M.Chollet, B.Biancardi and C.Clavel)
- Detection of contract cheating in pen-and-paper exams through the analysis of handwriting style (K.Kuznetsov, M.Barz and D.Sonntag)
- Leveraging gaze for potential error prediction in AI-support systems: An exploratory analysis of interaction with a simulated robot (B.Severitt, N.J.Castner, O.Lukashova-Sanz and S.Wahl)
- Developing a Generic Focus Modality for Multimodal Interactive Environments (F.Barros, A.Teixeira and S.Silva)
- Multimodal Prediction of User's Performance in High-Stress Dialogue Interactions (S.Nasihati Gilani, K.Pollard and D.Traum)
- Understanding the Physiological Arousal of Novice Performance Drivers for the Design of Intelligent Driving Systems (E.Kimani, A.L.S.Filipowicz and H.Yasuda)
- A Portable Ball with Unity-based Computer Game for Interactive Arm Motor Control Exercise (Y.Zhou, Y.An, Q.Niu, Q.Bu, Y.C.Liang, M.Leach and J.Sun)
- Virtual Reality Music Instrument Playing Game for Upper Limb Rehabilitation Training (M.Sun, Q.Bu, Y.Hou, X.Ju, L.Yu, E.G.Lim and J.Sun)
- Towards Objective Evaluation of Socially-Situated Conversational Robots: Assessing Human-Likeness through Multimodal User Behaviors (K.Inoue, D.Lala, K.Ochi, T.Kawahara and G.Skantze)
- "Am I listening?", Evaluating the Quality of Generated Data-driven Listening Motion (P.Wolfert, G.E.Henter and T.Belpaeme)
- LinLED: Low latency and accurate contactless gesture interaction (S.Viollet, C.Martin and J.-M.Ingargiola)

16:45-17:45  Blue Sky Papers
17:45-18:00  Closing
19:00-22:00  Banquet, Le Grand Salon, La Sorbonne, La Chancellerie des Universités de Paris

Papers Not Presented In-person
style=\"font-weight: 400;\">This is a list of papers for which no authors were able to attend the conference in person. While these papers do not appear in the program above, they are still available in the conference proceedings. Optionally, authors were invited to submit a pre-recorded video presentation of their paper, and submit it as supplementary material, accompanying the conference proceedings.<\/span><\/i><\/p>\n<p><i><\/i><\/p>\n<table border=\"1\" style=\"width: 100%; border-collapse: collapse; border-style: none; float: left; height: 58px;\">\n<tbody>\n<tr style=\"height: 48px;\">\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">MMASD: A Multimodal Dataset for Autism Intervention Analysis<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">J.Li, V.Chheang, P.Kullu, Z.Guo, A.Bhat, K.E.Barner and R.L.Barmaki<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">GCFormer: A Graph Convolutional Transformer for Speech Emotion Recognition<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">Y.Gao, H.Zhao, Y.Xiao and Z.Zhang<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">How Noisy is Too Noisy? The Impact of Data Noise on Multimodal Recognition of Confusion and Conflict During Collaborative Learning<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">Y.Ma, M.Celepkolu, K.E.Boyer, C.Lynch, E.Wiebe and M.Israel<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch to Portrait Generation<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">Y.Sun, Q.Wu, H.Zhou, K.Wang, T.Hu, C.-C.Liao, S.Miyafuji, Z.Liu and H.Koike<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Gait Event Prediction of People with Cerebral Palsy using Feature Uncertainty: A Low-Cost Approach<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">S.Chakraborty, N.Thomas and A.Nandy<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">ViFi-Loc: Multi-modal Pedestrian Localization using GAN with Camera-Phone Correspondences<span style=\"font-weight: 400;\"><br \/>\n<\/span><\/span><span style=\"color: #808080;\"><i><span style=\"font-weight: 400;\">H.Liu, H.Lu, K.Dana and M.Gruteser<\/span><\/i><\/span><\/td>\n<\/tr>\n<tr style=\"height: 48px;\">\n<td style=\"width: 67.5568%; height: 48px; vertical-align: top; border-style: none;\"><span style=\"color: #000000;\">Multimodal Approach to Investigate the Role of Cognitive Workload and User Interfaces in Human-robot 
- WiFiTuned: Monitoring Engagement in Online Participation by Harmonizing WiFi and Audio (V.K.Singh, P.Kar, A.M.Sohini, M.Rangaiah, S.Chakraborty and M.Maity)