{"id":1990,"date":"2024-09-30T21:50:35","date_gmt":"2024-09-30T16:20:35","guid":{"rendered":"https:\/\/icmi.acm.org\/2024\/?page_id=1990"},"modified":"2024-10-25T20:14:45","modified_gmt":"2024-10-25T14:44:45","slug":"sessions","status":"publish","type":"page","link":"https:\/\/icmi.acm.org\/2024\/sessions\/","title":{"rendered":"Sessions"},"content":{"rendered":"<p>[et_pb_section fb_built=&#8221;1&#8243; admin_label=&#8221;section&#8221; _builder_version=&#8221;4.14.4&#8243; background_enable_image=&#8221;off&#8221; custom_padding=&#8221;3px||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;][et_pb_row admin_label=&#8221;row&#8221; _builder_version=&#8221;4.14.4&#8243; background_size=&#8221;initial&#8221; background_position=&#8221;top_left&#8221; background_repeat=&#8221;repeat&#8221; width=&#8221;90%&#8221; min_height=&#8221;1612.7px&#8221; custom_margin=&#8221;|auto|221px|auto||&#8221; custom_padding=&#8221;4px|||||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;][et_pb_column type=&#8221;4_4&#8243; _builder_version=&#8221;3.25&#8243; custom_padding=&#8221;|||&#8221; global_colors_info=&#8221;{}&#8221; custom_padding__hover=&#8221;|||&#8221; theme_builder_area=&#8221;post_content&#8221;][et_pb_text _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; text_font=&#8221;||||||||&#8221; text_text_color=&#8221;#4f4f4f&#8221; text_font_size=&#8221;13px&#8221; header_4_text_color=&#8221;#282562&#8243; header_4_line_height=&#8221;2em&#8221; header_5_text_color=&#8221;#6292C2&#8243; header_5_line_height=&#8221;1.6em&#8221; custom_margin=&#8221;||0px|||&#8221; custom_padding=&#8221;||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<h4><b>Sessions<\/b><\/h4>\n<p>This is a tentative schedule for the Conference Program, we expect to make a few changes soon.<\/p>\n<p>&nbsp;<\/p>\n<p>[\/et_pb_text][et_pb_text 
module_id=&#8221;AEDC&#8221; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_text_color=&#8221;#282562&#8243; header_4_text_color=&#8221;#672B83&#8243; min_height=&#8221;1354.4px&#8221; custom_padding=&#8221;||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<strong>Adjunct Events Day 1 08:00-17:30 Doctoral Consortium<\/strong><br \/>\n<em>Session Chair: Yukiko Nakano<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>09:25<\/strong> Opening and Welcome<br \/>\n<em>Speaker: <span style=\"font-weight: 400;\">Micol Spitale<\/span><br \/>\n<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>Session 1: Education<\/strong><br \/>\n<em>Chair: <span style=\"font-weight: 400;\">Micol Spitale<\/span><br \/>\n<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>09:30<\/strong> <span style=\"font-weight: 400;\">Enhancing Collaboration and Performance among EMS Students through Multimodal Learning Analytics<\/span><br \/>\n<em><span style=\"font-weight: 400;\">Vasundhara Joshi<\/span><\/em><br \/>\n<em><span style=\"font-weight: 400;\">Mentor: Catha Oertel<\/span><\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>09:55<\/strong> <span style=\"font-weight: 400;\">Video Game Technologies Applied for Teaching Assembly Language Programming<\/span><br \/>\n<em>Ernesto Rivera<\/em><br \/>\n<em><span style=\"font-weight: 400;\">Mentor: Heloisa Candello<\/span><\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>10:20<\/strong> Coffee Break<\/p>\n<p style=\"padding-left: 40px;\"><strong>Session 2: Robots and Touch<\/strong><br \/>\n<em>Chair: <span style=\"font-weight: 400;\">Micol Spitale<\/span><br \/>\n<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>10:40<\/strong> <span style=\"font-weight: 400;\">A Musical Robot for People with Dementia<\/span><br \/>\n<em><span style=\"font-weight: 400;\">Paul Raingeard de la Bletiere<\/span><\/em><br \/>\n<em>Mentor: Muneeb 
Ahmad<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>11:05<\/strong> <span style=\"font-weight: 400;\">Real-Time Trust Measurement in Human-Robot Interaction: Insights from Physiological Behaviours<\/span>\u200b<br \/>\n<em><span style=\"font-weight: 400;\">Abdullah Alzahrani<\/span><\/em><br \/>\n<em>Mentor: Sean Andrist<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>11:30<\/strong> <span style=\"font-weight: 400;\">Designing Digital Multisensory Textile Experiences<\/span>\u200b<br \/>\n<em><span style=\"font-weight: 400;\">Shu Zhong<\/span><\/em><br \/>\n<em>Mentor: Heloisa Candello<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>12:00 <\/strong>Lunch and Interaction with Mentors<\/p>\n<p style=\"padding-left: 40px;\"><strong>Session 3: Social Interaction Modeling<\/strong><br \/>\n<em>Chair: <span style=\"font-weight: 400;\">Micol Spitale<\/span><br \/>\n<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>13:30<\/strong> <span style=\"font-weight: 400;\">Towards Automatic Social Involvement Estimation<\/span>\u200b\u200b<br \/>\n<em><span style=\"font-weight: 400;\">Zonghuan Li<\/span><\/em><br \/>\n<em>Mentor: Alessandro Vinciarelli<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>13:55<\/strong> <span style=\"font-weight: 400;\">Investigating Multi-Reservoir Computing for EEG-based Emotion Recognition<\/span>\u200b<br \/>\n<em><span style=\"font-weight: 400;\">Anubhav Anubhav<\/span><\/em><br \/>\n<em>Mentor: Albert Ali Salah<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>14:15<\/strong> <span style=\"font-weight: 400;\">Modelling Social Intentions in Complex Conversational Settings<\/span>\u200b<br \/>\n<em><span style=\"font-weight: 400;\">Ivan Kondyurin<\/span><\/em><br \/>\n<em>Mentor: Alessandro Vinciarelli<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>Session 4: Multimodal and Trustworthy AI<\/strong><br \/>\n<em>Chair: <span style=\"font-weight: 400;\">Micol Spitale<\/span><br \/>\n<\/em><\/p>\n<p 
style=\"padding-left: 40px;\"><strong>14:40<\/strong><span style=\"font-weight: 400;\">A Multimodal Understanding of the Eye-Mind Link<\/span><em><br \/>\n<span style=\"font-weight: 400;\">Megan Caruso<\/span><\/em><br \/>\n<em>Mentor: Stacy Marsella<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>15:05<\/strong> <span style=\"font-weight: 400;\">Towards Trustworthy and Efficient Diffusion Models<\/span>\u200b<br \/>\n<em><span style=\"font-weight: 400;\">Jayneel Vora<\/span><\/em><br \/>\n<em>Mentor: Albert Ali Salah<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>13:3<\/strong> <span style=\"font-weight: 400;\">Coffee Break<br \/>\n<\/span><\/p>\n<p style=\"padding-left: 40px;\"><strong>15:45 Early Career Stories: <span style=\"font-weight: 400;\">Mentors share early career stories and advice, students can ask any questions to the mentors from career to work-life balance<\/span><\/strong><br \/>\n<em>Chair: Micol Spitale<\/em><\/span><\/strong><\/p>\n<p style=\"padding-left: 40px;\"><strong><span style=\"font-weight: 400;\"><strong>17:00<\/strong> Closing<\/span><\/strong><\/p>\n<p style=\"padding-left: 40px;\"><strong><span style=\"font-weight: 400;\"><\/span><\/strong><\/p>\n<p style=\"padding-left: 40px;\"><strong><span style=\"font-weight: 400;\"><\/span><\/strong><\/p>\n<p>[\/et_pb_text][et_pb_text module_id=&#8221;AEGC1&#8243; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_text_color=&#8221;#282562&#8243; header_4_text_color=&#8221;#672B83&#8243; custom_padding=&#8221;||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<p><strong>Adjunt Events Day 1 13:30-17:30 Grand Challenge 1 (ERR)<\/strong><br \/><em>Session Chair: Maria Teresa Parreira<br \/><\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>13:30<\/strong> <span style=\"font-weight: 400;\">Introduction to the ERR@HRI challenge and baseline paper presentation<\/span><br \/><em>Maria Teresa 
Parreira<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>13:45<\/strong> <span style=\"font-weight: 400;\">A Time Series Classification Pipeline for Detecting Interaction Ruptures in HRI Based on User Reactions<\/span><br \/><i><span style=\"font-weight: 400;\">Peter Tisnikar<\/span><\/i><\/p>\n<p style=\"padding-left: 40px;\"><strong>14:05<\/strong> <span style=\"font-weight: 400;\">PRISCA at ERR@HRI 2024: Multimodal Representation Learning for Detecting Interaction Ruptures in HRI<\/span><br \/><i><span style=\"font-weight: 400;\">Silvia Rossi<\/span><\/i><\/p>\n<p style=\"padding-left: 40px;\"><strong>14:25<\/strong> <span style=\"font-weight: 400;\">Predicting Errors and Failures in Human-Robot Interaction from Multi-Modal Temporal Data<\/span><br \/><i><span style=\"font-weight: 400;\">Ruben Janssens<\/span><\/i><\/p>\n<p style=\"padding-left: 40px;\"><strong>14:45 <\/strong>Coffee Break<\/p>\n<p style=\"padding-left: 40px;\"><strong>15:00<\/strong> <span style=\"font-weight: 400;\">Keynote Speaker: TBD<\/span><br \/><i><span style=\"font-weight: 400;\">Chair: Maia Stiber<br \/><\/span><\/i><\/p>\n<p style=\"padding-left: 40px;\"><strong>15:30<\/strong> <span style=\"font-weight: 400;\">Final discussion: Future Directions<\/span><br \/><i><span style=\"font-weight: 400;\">Chair: Maia Stiber<\/span><\/i><\/p>\n<p style=\"padding-left: 40px;\"><strong>16:00<\/strong> Closing<\/p>\n<p style=\"padding-left: 40px;\">[\/et_pb_text][et_pb_text module_id=&#8221;OS1&#8243; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_text_color=&#8221;#282562&#8243; header_4_text_color=&#8221;#672B83&#8243; custom_padding=&#8221;||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<p><strong>Day 1 10:30-12:00 Oral Session 1: Multimodal and cross-modal learning<\/strong><br \/><em>Session Chair: Yukiko Nakano<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>10:30-10:50<\/strong> Mitigation 
of gender bias in automatic facial non-verbal behaviors generation for interactive social agents <strong>nominated for best paper award<\/strong><br \/><em>A. Delbosc, M. Ochs, N. Sabouret, B. Ravenet, and S. Ayache<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>10:50-11:10<\/strong> DoubleDistillation: Enhancing LLMs for Informal Text Analysis using Multistage Knowledge Distillation from Speech and Text <strong>nominated for best paper award<\/strong><br \/><em>F. Hasan, Y. Li, J. Foulds, S. Pan, B. Bhattacharjee<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>11:10-11:30<\/strong> Do We Need To Watch It All? Efficient Job Interview Video Processing with Differentiable Masking<br \/><em>H. Le, S. Li, C. O. Mawalim, H. H. Huang, C. W. Leong, and S. Okada<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>11:30-11:50<\/strong> <span style=\"font-weight: 400;\">A Model of Factors Contributing to the Success of Dialogical Explanations<\/span><br \/><i><span style=\"font-weight: 400;\">M. Booshehri, H. Buschmeier, and P. Cimiano<\/span><\/i><\/p>\n<p style=\"padding-left: 40px;\">[\/et_pb_text][et_pb_text module_id=&#8221;OS2&#8243; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_text_color=&#8221;#282562&#8243; header_4_text_color=&#8221;#672B83&#8243; custom_padding=&#8221;||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<p><strong>Day 1 13:30-15:00<\/strong> <strong>Oral Session 2: Human Communication Dynamics<\/strong><br \/><em>Session Chair: Hendrik Buschmeier<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>13:30-13:50<\/strong> Online Multimodal End-of-Turn Prediction for Three-party Conversations <strong>nominated for best paper award<\/strong><br \/><em>M. C. Lee and Z. 
Deng<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>13:50-14:10<\/strong> Decoding Contact: Automatic Estimation of Contact Signatures in Parent-Infant Free Play Interactions <strong>nominated for best paper award<\/strong><br \/><em>M. Doyran, A. A. Salah, and R. Poppe<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>14:10-14:30<\/strong> Leveraging Prosody as an Informative Teaching Signal for Agent Learning: Exploratory Studies and Algorithmic Implications<br \/><em>M. Knierim, S. Jain, M. H. Aydo\u011fan, K. Mitra, K. Desai, A. Saran, and K. Baraka<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>14:30-14:50<\/strong> SEMPI: A Database for Understanding Social Engagement in Video-Mediated Multiparty Interaction<br \/><em>M. Siniukov, Y. Yin, E. Fast, Y. Qi, A. Monga, A. Kim, and M. Soleymani<\/em><\/p>\n<p style=\"padding-left: 40px;\">[\/et_pb_text][et_pb_text module_id=&#8221;panel&#8221; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; hover_enabled=&#8221;0&#8243; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221; sticky_enabled=&#8221;0&#8243;]<\/p>\n<p><strong>Day 1 15:30-16:30<\/strong> <strong>Panel: Multimodal Research in Latin America<\/strong><br \/><em>Session Chair: <span style=\"font-weight: 400;\">Prof. Daniel Gatica-Perez<\/span><br \/><\/em><\/p>\n<p>Prof. Carlos Busso, University of Texas at Dallas, USA<br \/>Dr. Heloisa Candello, IBM Brazil<br \/>Prof. Monica Perusquia-Hernandez, Nara Institute of Science and Technology, Japan<br \/>Prof. 
Laura Cabrera-Quiros, TEC, Costa Rica<\/p>\n<p>[\/et_pb_text][et_pb_text module_id=&#8221;Poster1&#8243; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_text_color=&#8221;#282562&#8243; header_4_text_color=&#8221;#672B83&#8243; custom_padding=&#8221;||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<p><strong><\/strong><\/p>\n<p><strong>Day 1 16:30-18:00<\/strong> Poster Presentations 1 (including DC posters)<br \/><em>Session Chair: Tariq Iqbal<\/em><\/p>\n<p style=\"padding-left: 40px;\">Exploring the Alteration and Masking of Everyday Noise Sounds using Auditory Augmented Reality<br \/><em>I. A. Bustoni, M. McGill, and S. Brewster<\/em><\/p>\n<p style=\"padding-left: 40px;\">The Plausibility Paradox on Interactions with Complex Virtual Objects in Virtual Environments<br \/><em>D. Alvarado-Chou and Y. Law<\/em><\/p>\n<p style=\"padding-left: 40px;\">First-Person Perspective Induces Stronger Feelings of Awe and Presence Compared to Third-Person Perspective in Virtual Reality<br \/><em>H. Otsubo, A. Marquardt, M. Steininger, M. Lehnort, F. Dollack, Y. Hirao, M. Perusquia-Hernandez, H. Uchiyama, E. Kruijff, B. Riecke, and K. Kiyokawa<\/em><\/p>\n<p style=\"padding-left: 40px;\">Poke Typing: Effects of Hand-Tracking Input and Key Representation on Mid-Air Text Entry Performance in Virtual Reality<br \/><em>M. Akhoroz and C. Yildirim<\/em><\/p>\n<p style=\"padding-left: 40px;\">Is Distance a Modality? Multi-Label Learning for Speech-Based Joint Prediction of Attributed Traits and Perceived Distances in 3D Audio Immersive Environments<br \/><em>E. Fringi, N. Alareef, L. Picinali, S. Brewster, T. Guha, and A. Vinciarelli<\/em><\/p>\n<p style=\"padding-left: 40px;\">Feeling Textiles through AI: An exploration into Multimodal Language Models and Human Perception Alignment<br \/><em>S. Zhong, E. Gatti, Y. Cho, and M. 
Obrist<\/em><\/p>\n<p style=\"padding-left: 40px;\">SemanticTap: A Haptic Toolkit for Vibration Semantic Design of Smartphone<br \/><em>R. Zhang, Y. Li, and Y. Jiao<\/em><\/p>\n<p style=\"padding-left: 40px;\">QuietSync: Integrating Multimodal Signals for Accurate Silent Speech Interaction with Head-Worn Devices<br \/><em>T. Srivastava, R. M. Winters, T. Gable, Y. T. Wang, T. LaScala, and I. Tashev<\/em><\/p>\n<p style=\"padding-left: 40px;\">NearFetch: Enhancing Touch-Based Mobile Interaction on Public Displays with an Embedded Programmable NFC Array<br \/><em>Q. Cao, J. Zhang, S. Fan, J. Rong, M. Qi, Z. Duan, P. Zhao, L. Liu, Z. Zhou, and W. Chen<\/em><\/p>\n<p style=\"padding-left: 40px;\">ScentHaptics: Augmenting the Haptic Experiences of Digital Mid-Air Textiles with Scent<br \/><em>C. Dawes, J. Xue, G. Brianza, P. Cornelio, R. Montano Murillo, E. Maggioni, and M. Obrist<\/em><\/p>\n<p style=\"padding-left: 40px;\">LLM-powered Multimodal Insight Summarization for UX Testing<br \/><em>K. Turbeville, J. Muengtaweepongsa, S. Stevens, J. Moss, A. Pon, K. Lee, C. Mehra, J. Gutierrez Villalobos, and R. Kumar<\/em><\/p>\n<p style=\"padding-left: 40px;\">Generalization Boost in Bimodal Classification via Data Fusion Trained on Sparse Datasets<br \/><em>W. Yu, D. Kolossa, and R. Nickel<\/em><\/p>\n<p style=\"padding-left: 40px;\">A multimodal analysis of environmental stress experienced by older adults during outdoor walking trips: Implications for designing new intelligent technologies to enhance walkability in low-income Latino communities<br \/><em>R. Yupanqui, J. Sohn, Y. Kim, R. Flores, H. Lee, J. Kim, S. Lee, Y. Ham, C. Lee, and T. 
Chaspari<\/em><\/p>\n<p style=\"padding-left: 40px;\">\n<p>[\/et_pb_text][et_pb_text module_id=&#8221;DC&#8221; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_text_color=&#8221;#282562&#8243; header_4_text_color=&#8221;#672B83&#8243; custom_padding=&#8221;||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<p><strong>Day 1 16:30-18:00<\/strong> Doctoral Consortium Papers Poster Session<br \/><em>Session Chair: Micol Spitale<\/em><\/p>\n<p style=\"padding-left: 40px;\">Towards Trustworthy and Efficient Diffusion Models<br \/><em>Jayneel Vora<\/em><\/p>\n<p style=\"padding-left: 40px;\">Video Game Technologies Applied for Teaching Assembly Language Programming<br \/><em>Ernesto Rivera<\/em><\/p>\n<p style=\"padding-left: 40px;\">Towards Automatic Social Involvement Estimation<br \/><em>Zonghuan Li<\/em><\/p>\n<p style=\"padding-left: 40px;\">A Musical Robot for People with Dementia<br \/><em>Paul Raingeard de la Bletiere<\/em><\/p>\n<p style=\"padding-left: 40px;\">Investigating Multi-Reservoir Computing for EEG-based Emotion Recognition<br \/><em>Anubhav Anubhav<\/em><\/p>\n<p style=\"padding-left: 40px;\">A Multimodal Understanding of the Eye-Mind Link<br \/><em>Megan Caruso<\/em><\/p>\n<p style=\"padding-left: 40px;\">Real-Time Trust Measurement in Human-Robot Interaction: Insights from Physiological Behaviours<br \/><em>Abdullah Alzahrani<\/em><\/p>\n<p style=\"padding-left: 40px;\">Designing Digital Multisensory Textile Experiences<br \/><em>Shu Zhong<\/em><\/p>\n<p style=\"padding-left: 40px;\">Modelling Social Intentions in Complex Conversational Settings<br \/><em>Ivan Kondyurin<\/em><\/p>\n<p style=\"padding-left: 40px;\">Enhancing Collaboration and Performance among EMS Students through Multimodal Learning Analytics<br \/><em>Vasundhara Joshi<\/em><\/p>\n<p style=\"padding-left: 40px;\">[\/et_pb_text][et_pb_text module_id=&#8221;OS3&#8243; 
_builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_text_color=&#8221;#282562&#8243; header_4_text_color=&#8221;#672B83&#8243; custom_padding=&#8221;||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<p><strong>Day 2 10:30-12:00<\/strong> <strong>Oral Session 3: Affective Computing<\/strong><br \/><em>Session Chair: Carlos Busso<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>10:30-10:50<\/strong> Relating Students Cognitive Processes and Learner-Centered Emotions: An Advanced Deep Learning Approach<br \/><em>A. T S and G. Biswas<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>10:50-11:10<\/strong> On Multimodal Emotion Recognition for Human-Chatbot Interaction in the Wild<br \/><em>N. Kovacevic, C. Holz, M. Gross, and R. Wampfler<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>11:10-11:30<\/strong> Towards Automated Annotation of Infant-Caregiver Engagement Phases with Multimodal Foundation Models<br \/><em>D. Withanage Don, D. Schiller, T. Hallmen, S. Mertes, T. Baur, F. Lingenfelser, M. M\u00fcller, L. Kaubisch, C. Reck, and E. Andr\u00e9<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>11:30-11:50<\/strong> Emotion Recognition for Multimodal Recognition of Attachment in School-Age Children<br \/><em>A. Buker and A. 
Vinciarelli<\/em><\/p>\n<p style=\"padding-left: 40px;\">[\/et_pb_text][et_pb_text module_id=&#8221;OS4&#8243; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_text_color=&#8221;#282562&#8243; header_4_text_color=&#8221;#672B83&#8243; custom_padding=&#8221;||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<p><strong>Day 2 13:30-14:30<\/strong> <strong>Oral Session 4: Special session on Personalization of Robot\u2019s Multimodal Behavior<\/strong><br \/><em>Session Chair: Silvia Rossi<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>13:30-13:50<\/strong> Multimodal User Enjoyment Detection in Human-Robot Conversation: The Power of Large Language Models<br \/><em>A. Pereira, L. Marcinek, J. Miniotaite, S. Thunberg, E. Lagerstedt, J. Gustafson, G. Skantze, and B. Irfan<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>13:50-14:10<\/strong> Predicting Human Intent to Interact with a Public Robot: The People Approaching Robots Database (PAR-D)<br \/><em>S. Thompson, A. Lew, Y. Li, E. Stanish, A. Huang, R. Phanse, and M. V\u00e1zquez<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>14:10-14:30<\/strong> M2RL: A Multimodal Multi-Interface Dataset for Robot Learning from Human Demonstrations<br \/><em>S. Hasan, M. Yasar, and T. Iqbal<\/em><\/p>\n<p>[\/et_pb_text][et_pb_text module_id=&#8221;Poster2&#8243; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_text_color=&#8221;#282562&#8243; header_4_text_color=&#8221;#672B83&#8243; custom_padding=&#8221;||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<p><strong>Day 2 15:00-17:00<\/strong> Poster Presentations 2 &amp; Demo Session<br \/><em>Session Chair: Monica Perusquia<\/em><\/p>\n<p style=\"padding-left: 40px;\">Perceived Text Relevance Estimation Using Scanpaths and GNNs<br \/><em>A. Mohamed Selim, O. S. Bhatti, M. Barz, and D. 
Sonntag<\/em><\/p>\n<p style=\"padding-left: 40px;\">Juicy Text: Onomatopoeia and Semantic Text Effects for Juicy Player Experiences<br \/><em>E. Fabre, K. Seaborn, A. Verhulst, Y. Itoh, and J. Rekimoto<\/em><\/p>\n<p style=\"padding-left: 40px;\">Learning Co-Speech Gesture Representations in Dialogue through Contrastive Learning: An Intrinsic Evaluation<br \/><em>E. Ghaleb, B. Khaertdinov, W. Pouw, M. Rasenberg, J. Holler, A. Ozyurek, and R. Fernandez<\/em><\/p>\n<p style=\"padding-left: 40px;\">Multilingual Dyadic Interaction Corpus NoXi+J: Toward Understanding Asian-European Non-verbal Cultural Characteristics and their Influences on Engagement<br \/><em>M. Funk, S. Okada, and E. Andr\u00e9<\/em><\/p>\n<p style=\"padding-left: 40px;\">Exploring Interlocutor Gaze Interactions in Conversations based on Functional Spectrum Analysis<br \/><em>A. Tashiro, M. Imamura, S. Kumano, and K. Otsuka<\/em><\/p>\n<p style=\"padding-left: 40px;\">Predictability of Understanding in Explanatory Interactions Based on Multimodal Cues<br \/><em>O. Turk, S. Lazarov, Y. Wang, H. Buschmeier, A. Grimminger, and P. Wagner<\/em><\/p>\n<p style=\"padding-left: 40px;\">Can Text-to-image Model Assist Multi-modal Learning for Visual Recognition with Visual Modality Missing?<br \/><em>T. Feng, D. Yang, D. Bose, and S. Narayanan<\/em><\/p>\n<p style=\"padding-left: 40px;\">Automatic mild cognitive impairment estimation from the group conversation of coimagination method<br \/><em>S. Li, K. Kumagai, M. Otake-Matsuura, and S. Okada<\/em><\/p>\n<p style=\"padding-left: 40px;\">Lip Abnormality Detection for Patients with Repaired Cleft Lip and Palate: A Lip Normalization Approach<br \/><em>K. Rosero, A. Salman, R. R. Hallac, and C. Busso<\/em><\/p>\n<p style=\"padding-left: 40px;\">&#8220;Uh, This One?&#8221;: Leveraging Behavioral Signals for Detecting Confusion during Physical Tasks<br \/><em>M. Stiber, D. Bohus, and S. 
Andrist<\/em><\/p>\n<p style=\"padding-left: 40px;\">Understanding Non-Verbal Irony Markers: Machine Learning Insights Versus Human Judgment<br \/><em>M. Spitale, F. Catania, and F. Panzeri<\/em><\/p>\n<p style=\"padding-left: 40px;\">\n<p>[\/et_pb_text][et_pb_text module_id=&#8221;Demo&#8221; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_text_color=&#8221;#282562&#8243; header_4_text_color=&#8221;#672B83&#8243; custom_padding=&#8221;||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<p><strong>Day 2 15:00-17:00<\/strong> Demo Session<br \/><em>Session Chair: Raj Tumuluri<\/em><\/p>\n<p style=\"padding-left: 40px;\">An Adaptive GPT-4-powered Socially Interactive Agent for Conversing about Health<br \/><em>J. Molto, U. Visser, J. Fields, and C. Lisetti<\/em><\/p>\n<p style=\"padding-left: 40px;\">An AI-Powered Interactive Interface to Enhance Accessibility of Interview Training for Military Veterans<br \/><em>R. C. Yarlagadda, P. Aggarwal, V. Jamadagni, G. Mahajani, P. Malasani, E. H. Nirjhar, and T. Chaspari<\/em><\/p>\n<p style=\"padding-left: 40px;\">Combining Generative and Discriminative AI for High-Stakes Interview Practice<br \/><em>C. W. Leong, N. Jawahar, V. Basheerabad, T. W\u00f6rtwein, A. Emerson, and G. Sivan<\/em><\/p>\n<p style=\"padding-left: 40px;\">Enhancing Biodiversity Monitoring: An Interactive Tool for Efficient Identification of Species in Large Bioacoustics Datasets<br \/><em>H. Kath, I. Troshani, B. L\u00fcers, T. S. Gouv\u00eaa, and D. Sonntag<\/em><\/p>\n<p style=\"padding-left: 40px;\">ARCADE: An Augmented Reality Display Environment for Multimodal Interaction with Conversational Agents<br \/><em>C. Schindler, D. Mayumi, Y. Matsuda, N. Rach, K. Yasumoto, and W. Minker<\/em><\/p>\n<p style=\"padding-left: 40px;\">Let&#8217;s Dance Together! AI Dancers Can Dance to Your Favorite Music and Style<br \/><em>R. Ishii, S. Eitoku, S. Matsuo, M. Makiguchi, A. 
Hoshi, and L. P. Morency<\/em><\/p>\n<p style=\"padding-left: 40px;\">Human Contact Annotator: Annotating Physical Contact in Dyadic Interactions<br \/><em>M. Doyran, A. A. Salah, and R. Poppe<\/em><\/p>\n<p style=\"padding-left: 40px;\">Bespoke: Using LLM agents to generate just-in-time interfaces by reasoning about user intent<br \/><em>P. Nandy, S. O. Adalgeirsson, A. K. Sinha, T. Kraljic, M. Cleron, L. Shi, A. Singh, A. Chaudhary, A. Ganti, C. A. Melancon, S. Zhang, D. Robishaw, H. Ciurdar, J. Secor, K. A. Robertsen, K. Climer, M. Le, M. Venkatesan, P. Chi, P. Li, P. F. McDermott, R. Shim, S. Onsan, S. Vaishnav, and S. Guam\u00e1n<\/em><\/p>\n<p>&nbsp;<\/p>\n<p style=\"padding-left: 40px;\">[\/et_pb_text][et_pb_text module_id=&#8221;OS5&#8243; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_text_color=&#8221;#282562&#8243; header_4_text_color=&#8221;#672B83&#8243; custom_padding=&#8221;||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<p><strong>Day 3 10:30-12:00<\/strong> <strong>Oral Session 5: Biomedical Data Processing<\/strong><br \/><em>Session Chair: Ali Etemad<\/em><\/p>\n<p style=\"padding-left: 40px;\">Putting the \u201cBrain\u201d Back in the Eye-Mind Link: Aligning Eye Movements and Brain Activations During Naturalistic Reading<br \/><em>M. Caruso, R. Southwell, L. Hirshfield, and S. D&#8217;Mello<\/em><\/p>\n<p style=\"padding-left: 40px;\">Distinguishing Target and Non-Target Fixations with EEG and Eye Tracking in Realistic Visual Scenes<br \/><em>M. Sharma, C. Mart\u00ednez, B. Wirth, A. Kr\u00fcger, and P. M\u00fcller<\/em><\/p>\n<p style=\"padding-left: 40px;\">Detecting Deception in Natural Environments Using Incremental Transfer Learning<br \/><em>M. Ahmad, A. Alzahrani, and S. Ahmad<\/em><\/p>\n<p style=\"padding-left: 40px;\">Stressor Type Matters! 
&#8212; Exploring Factors Influencing Cross-Dataset Generalizability of Physiological Stress Detection<br \/><em>P. Prajod, B. Mahesh, and E. Andr\u00e9<\/em><\/p>\n<p style=\"padding-left: 40px;\">[\/et_pb_text][et_pb_text module_id=&#8221;Challenge&#8221; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_text_color=&#8221;#282562&#8243; header_4_text_color=&#8221;#672B83&#8243; custom_padding=&#8221;||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<p><strong>Day 3 <\/strong><strong>14:30-15:00<\/strong> Challenge Overview<br \/><em>Session Chair: Ronald B\u00f6ck<\/em><\/p>\n<p style=\"padding-left: 40px;\">Introduction to Grand Challenges 2024<br \/><em>Ronald B\u00f6ck<\/em><\/p>\n<p style=\"padding-left: 40px;\">Empathic Virtual Agent Challenge: Appraisal-based Recognition of Affective States<br \/><em>Safaa Azzakhnini<\/em><\/p>\n<p style=\"padding-left: 40px;\">ERR@HRI 2024 Challenge: Multimodal Detection of Errors and Failures in Human-Robot Interactions<br \/><em>Micol Spitale<\/em><\/p>\n<p style=\"padding-left: 40px;\">\n<p>[\/et_pb_text][et_pb_text module_id=&#8221;BlueSky&#8221; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_text_color=&#8221;#282562&#8243; header_4_text_color=&#8221;#672B83&#8243; custom_padding=&#8221;||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<p><strong>Day 3 15:30-16:30<\/strong> Blue Sky Papers<br \/><em>Session Chair: Ali Etemad<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>15:30-15:50<\/strong> AI as Modality in Human Augmentation: Toward New Forms of Multimodal Interaction with AI-Embodied Modalities<br \/><em>R.-D. Vatavu<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>15:50-16:10<\/strong> RealSeal: Revolutionizing Media Authentication with Real-Time Realism Scoring<br \/><em>B. Radharapu and H. 
Krishna<\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>16:10-16:30<\/strong> Everything We Hear: Towards Tackling Misinformation in Podcasts<br \/><em>S. P. Cherumanal, U. Gadiraju and D. Spina<\/em><\/p>\n<p style=\"padding-left: 40px;\">[\/et_pb_text][et_pb_text module_id=&#8221;Poster3&#8243; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_text_color=&#8221;#282562&#8243; header_4_text_color=&#8221;#672B83&#8243; custom_padding=&#8221;||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<p><strong>Day 3 <\/strong><strong>16:30-18:00<\/strong> Poster Presentations 3 &amp; Late Breaking Results<br \/><em>Session Chair: Radoslaw Niewiadomski<\/em><\/p>\n<p style=\"padding-left: 40px;\">Generating Facial Expression Sequences of Complex Emotions with Generative Adversarial Networks<br \/><em>Z. Belmekki, D. G\u00f3mez J\u00e1uregui, P. Reuter, J. Li, J. C. Martin, K. Jenkins, and N. Couture<\/em><\/p>\n<p style=\"padding-left: 40px;\">Envisioning Futures: How the Modality of AI Recommendations Impacts Conversation Flow in AR-enhanced Dialogue<br \/><em>S. Villa, Y. Weiss, M. Y. Lu, M. Ziarko, A. Schmidt, and J. Niess<\/em><\/p>\n<p style=\"padding-left: 40px;\">Across Trials vs Subjects vs Contexts: A Multi-Reservoir Computing Approach for EEG Variations in Emotion Recognition<br \/><em>A. Anubhav and K. Fujiwara<\/em><\/p>\n<p style=\"padding-left: 40px;\">Detecting Aware and Unaware Mind Wandering During Lecture Viewing: A Multimodal Machine Learning Approach Using Eye Tracking, Facial Videos and Physiological Data<br \/><em>B. B\u00fchler, E. Bozkir, H. Deininger, P. Goldberg, P. Gerjets, U. Trautwein, and E. Kasneci<\/em><\/p>\n<p style=\"padding-left: 40px;\">MR-Driven Near-Future Realities: Previewing Everyday Life Real-World Experiences Using Mixed Reality<br \/><em>F. Mathis, B. Myers, B. Lafreniere, M. Glueck, and D. 
Porpino Sobreira Marques<\/em><\/p>\n<p style=\"padding-left: 40px;\">Integrating Multimodal Affective Signals for Stress Detection from Audio-Visual Data<br \/><em>D. Ghose, O. Gitelson, and B. Scassellati<\/em><\/p>\n<p style=\"padding-left: 40px;\">Anonymous-Corpus: A Multimodal Database for Understanding Video-Learning Experience<br \/><em>A. Salman, N. Wang, L. Martinez-Lucas, A. Vidal, and C. Busso<\/em><\/p>\n<p style=\"padding-left: 40px;\">NapTune: Prompt-tuning for Mood Classification with Wearable Time-series along with Previous Night&#8217;s Sleep-related Measures<br \/><em>D. Shome, N. Montazeri Ghahjaverestan, and A. Etemad<\/em><\/p>\n<p style=\"padding-left: 40px;\">Improving Usability of Data Charts in Multimodal Documents for Low Vision Users<br \/><em>Y. Prakash, A. Kolgar Nayak, S. Alyaan, P. A. Khan, H. N. Lee, and V. Ashok<\/em><\/p>\n<p style=\"padding-left: 40px;\">Participation Role-Driven Engagement Estimation of ASD Individuals in Neurodiverse Group Discussions<br \/><em>K. Stefanov, Y. Nakano, C. Kobayashi, I. Hoshina, T. Sakato, F. Nihei, C. Takayama, R. Ishii, and M. Tsujii<\/em><\/p>\n<p style=\"padding-left: 40px;\">Detecting Autism from Head Movements using Kinesics<br \/><em>M. Gokmen, E. Sariyanidi, L. Yankowitz, C. J. Zampella, R. T. Schultz, and B. Tunc<\/em><\/p>\n<p style=\"padding-left: 40px;\">Perception of Stress: A Comparative Multimodal Analysis of Time-Continuous Stress Ratings from Self and Observers<br \/><em>E. H. Nirjhar, W. Arthur Jr., and T. 
Chaspari<\/em><\/p>\n<p style=\"padding-left: 40px;\">[\/et_pb_text][et_pb_text module_id=&#8221;LBR&#8221; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_text_color=&#8221;#282562&#8243; header_4_text_color=&#8221;#672B83&#8243; custom_padding=&#8221;||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<p><strong>Day 3 16:30-18:00 <\/strong>Late Breaking Results<strong><br \/><\/strong><em>Session Chair: Ronald B\u00f6ck<\/em><\/p>\n<p style=\"padding-left: 40px;\">User-Defined Interaction for Very Low-Cost Head-Mounted Displays<br \/><em>Y. C. Law, H. Mendieta-D\u00e1vila, D. Garc\u00eda-Fallas, R. G. Quiros, and M. Chac\u00f3n-Rivas<\/em><\/p>\n<p style=\"padding-left: 40px;\">Effects of Incoherence in Multimodal Explanations of Robot Failures<br \/><em>P. Pramanick, N. Federico, L. Raggioli, A. Rossi, and S. Rossi<\/em><\/p>\n<p style=\"padding-left: 40px;\">Design and Preliminary Evaluation of a Stress Reflection System for High-Stress Training Environments<br \/><em>S. Akiri, V. Joshi, S. Taherzadeh, G. Williams, H. M. Mentis, and A. Kleinsmith<\/em><\/p>\n<p style=\"padding-left: 40px;\">Haptic Feedback to Reduce Individual Differences in Corrective Actions for Skill Learning<br \/><em>S. Ono, N. Ninomiya, and H. Kanai<\/em><\/p>\n<p style=\"padding-left: 40px;\">Towards Multimodality: Comparing Quantifications of Movement Coordination<br \/><em>C. Fan, V. Romero, A. Paxton, and T. Chowdhury<\/em><\/p>\n<p style=\"padding-left: 40px;\">Unlocking the Potential of Multimodal Compositionality for Enhanced Recommendations through Sentiment Analysis<br \/><em>S. Nazir and M. Sadrzadeh<\/em><\/p>\n<p style=\"padding-left: 40px;\">Enhancing Autism Spectrum Disorder Screening: Implementation and Pilot Testing of a Robot-Assisted Digital Tool<br \/><em>A. Di Nuovo and A. 
Kay<\/em><\/p>\n<p style=\"padding-left: 40px;\">Understanding LLMs Ability to Aid Malware Analysts in Bypassing Evasion Techniques<br \/><em>M. Y. Wong, K. Valakuzhy, M. Ahamad, D. Blough, and F. Monrose<\/em><\/p>\n<p style=\"padding-left: 40px;\">&#8220;Is This It?&#8221;: Towards Ecologically Valid Benchmarks for Situated Collaboration<br \/><em>D. Bohus, S. Andrist, Y. Bao, E. Horvitz, and A. Paradiso<\/em><\/p>\n<p style=\"padding-left: 40px;\">An Audiotactile System for Accessible Graphs on a Coordinate Plane<br \/><em>C. Yang and P. Taele<\/em><\/p>\n<p style=\"padding-left: 40px;\">Levels of Multimodal Interaction<br \/><em>A. K. Sinha, A. Olwal, and C. Kulkarni<\/em><\/p>\n<p style=\"padding-left: 40px;\">Comparing Subjective Measures of Workload in Video Game Play: Evaluating the Test-Retest Reliability of the VGDS and NASA-TLX<br \/><em>E. Pretty, R. L. Martins Guarese, H. Fayek, and F. Zambetta<\/em><\/p>\n<p style=\"padding-left: 40px;\">Towards Investigating Biases in Spoken Conversational Search<br \/><em>S. P. Cherumanal, J. R. Trippas, and D. Spina<\/em><\/p>\n<p style=\"padding-left: 40px;\">Crossmodal Correspondences between Piquancy\/Spiciness and Visual Shape<br \/><em>Y. Wang, M. Ohno, T. Narumi, and Y. 
ah Seong<\/em><\/p>\n<p>[\/et_pb_text][et_pb_text module_id=&#8221;AEGC2&#8243; _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; header_text_color=&#8221;#282562&#8243; header_4_text_color=&#8221;#672B83&#8243; custom_padding=&#8221;||0px|||&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<p><strong>Adjunct Events Day 2 09:00-12:00<\/strong> Grand Challenge 2 (EVAC)<br \/><em>Session Chair: <span style=\"font-weight: 400;\">Safaa Azzakhnini<\/span><\/em><\/p>\n<p style=\"padding-left: 40px;\"><strong>09:00<\/strong> <span style=\"font-weight: 400;\">EVAC\u20192024 Opening &amp; Challenge Introduction<\/span><\/p>\n<p style=\"padding-left: 40px;\"><strong>09:40<\/strong> EVAC\u20192024 Contribution &#8211; Johns Hopkins Center for Language and Speech Processing<\/p>\n<p style=\"padding-left: 40px;\"><strong>10:30<\/strong> EVAC\u20192024 Challenge Results<\/p>\n<p style=\"padding-left: 40px;\"><strong>10:40<\/strong> EVAC\u20192024 Panel Discussion<\/p>\n<p style=\"padding-left: 40px;\"><strong>11:30<\/strong> Closing<\/p>\n<p style=\"padding-left: 40px;\">[\/et_pb_text][et_pb_text _builder_version=&#8221;4.14.4&#8243; _module_preset=&#8221;default&#8221; global_colors_info=&#8221;{}&#8221; theme_builder_area=&#8221;post_content&#8221;]<\/p>\n<p><strong>Papers not presented at the conference (included in the main proceedings)<\/strong><\/p>\n<p style=\"padding-left: 40px;\"><span style=\"font-weight: 400;\">SMURF: Statistical Modality Uniqueness and Redundancy Factorization <\/span><b>nominated for best paper award<\/b><i><br \/><span style=\"font-weight: 400;\">W\u00f6rtwein, N. Allen, J. Cohn, L. P. 
Morency<\/span><\/i><\/p>\n<p style=\"padding-left: 40px;\">The impact of auditory warning types and emergency obstacle avoidance takeover scenarios on takeover behavior<br \/><em>X. Li and Z. Xu<\/em><\/p>\n<p style=\"padding-left: 40px;\">Low-Rank Adaptation of Time Series Foundational Models for Out-of-Domain Modality Forecasting<br \/><em>D. Gupta, A. Bhatti, S. Parmar, C. Dan, Y. Liu, B. Shen, and S. Lee<\/em><\/p>\n<p style=\"padding-left: 40px;\">Nonverbal Dynamics in Dyadic Videoconferencing Interaction: The Role of Video Resolution and Conversational Quality<br \/><em>C. Diao, S. Arevalo Arboleda, and A. Raake<\/em><\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][\/et_pb_section]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Sessions This is a tentative schedule for the Conference Program, we expect to make a few changes soon. &nbsp;Adjunt Events Day 1 08:00-17:30 Doctoral Consortium Session Chair: Yukiko Nakano 09:25 Opening and Welcome Speaker: Micol Spitale Session 1: Education Chair: Micol Spitale 09:30 Enhancing Collaboration and Performance among EMS Students through Multimodal Learning Analytics Vasundhara [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_et_pb_use_builder":"on","_et_pb_old_content":"<!-- wp:paragraph -->\n<p>This is an example page. It's different from a blog post because it will stay in one place and will show up in your site navigation (in most themes). Most people start with an About page that introduces them to potential site visitors. It might say something like this:<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:quote -->\n<blockquote class=\"wp-block-quote\"><p>Hi there! I'm a bike messenger by day, aspiring actor by night, and this is my website. I live in Los Angeles, have a great dog named Jack, and I like pi\u00f1a coladas. 
(And gettin' caught in the rain.)<\/p><\/blockquote>\n<!-- \/wp:quote -->\n\n<!-- wp:paragraph -->\n<p>...or something like this:<\/p>\n<!-- \/wp:paragraph -->\n\n<!-- wp:quote -->\n<blockquote class=\"wp-block-quote\"><p>The XYZ Doohickey Company was founded in 1971, and has been providing quality doohickeys to the public ever since. Located in Gotham City, XYZ employs over 2,000 people and does all kinds of awesome things for the Gotham community.<\/p><\/blockquote>\n<!-- \/wp:quote -->\n\n<!-- wp:paragraph -->\n<p>As a new WordPress user, you should go to <a href=\"https:\/\/icmi.acm.org\/2024\/wp-admin\/\">your dashboard<\/a> to delete this page and create new pages for your content. Have fun!<\/p>\n<!-- \/wp:paragraph -->","_et_gb_content_width":"","inline_featured_image":false,"footnotes":""},"class_list":["post-1990","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/icmi.acm.org\/2024\/wp-json\/wp\/v2\/pages\/1990","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/icmi.acm.org\/2024\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/icmi.acm.org\/2024\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/icmi.acm.org\/2024\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/icmi.acm.org\/2024\/wp-json\/wp\/v2\/comments?post=1990"}],"version-history":[{"count":38,"href":"https:\/\/icmi.acm.org\/2024\/wp-json\/wp\/v2\/pages\/1990\/revisions"}],"predecessor-version":[{"id":2099,"href":"https:\/\/icmi.acm.org\/2024\/wp-json\/wp\/v2\/pages\/1990\/revisions\/2099"}],"wp:attachment":[{"href":"https:\/\/icmi.acm.org\/2024\/wp-json\/wp\/v2\/media?parent=1990"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}