ICMI 2017 Conference Programme
Monday, 13 November 2017
Location: Sir Alwyn Williams Building, University of Glasgow
(The registration desk will open from 08:00 to 17:00 in the foyer)
09:00 - 12:30 |
Tutorial: Computational Interaction Design
Dr. John Williamson
Room: 422
|
09:30 - 18:15 |
Doctoral Consortium
Room: Level 5
|
09:00 - 17:30 |
Workshops
|
|
SIAA 2017: 1st International Workshop on Investigating Social Interactions with Artificial Agents
Organisers: Thierry Chaminade, Fabrice Lefevre, Noël Nguyen, and Magalie Ochs
Room: 423
|
|
MIE 2017: 1st International Workshop on Multimodal Interaction for Education
Organisers: Gualtiero Volpe, Monica Gori, Nadia Bianchi-Berthouze, Gabriel Baud-Bovy, Paolo Alborno and Erica Volta
Room: 303
|
|
MHFI 2017: 2nd International Workshop on Multi-sensorial Approaches to Human-Food Interaction
Organisers: Carlos Velasco, Anton Nijholt, Marianna Obrist, Katsunori Okajima, H. N. J. Schifferstein, and Charles Spence
Room: 404
|
|
WOCCI 2017: 6th International Workshop on Child Computer Interaction
Organisers: Keelan Evanini, Maryam Najafian, Saeid Safavi, and Kay Berkling
Room: F121
|
10:15 - 10:45 |
Coffee Break
|
12:30 - 14:00 |
Lunch
|
15:30 - 16:00 |
Coffee Break
|
Tuesday, 14 November 2017
Location: Hilton Glasgow Grosvenor
(The registration desk will open from 08:00)
09:00 - 09:15 |
Conference Opening
Room: Grosvenor Suite
General Chairs: Alessandro Vinciarelli and Edward Lank
|
09:15 - 10:15 |
Keynote: Gastrophysics: Using Technology to Enhance the Experience of Food and Drink
Prof. Charles Spence
Room: Grosvenor Suite
Session Chair: Stephen Brewster (University of Glasgow)
|
10:15 - 10:45 |
Coffee Break
|
10:45 - 12:35 |
Session 1: Children and Interaction
Room: Grosvenor Suite
Session Chair: Ed Lank (University of Waterloo)
|
10:45 |
Tablets, Tabletops, and Smartphones: Cross-Platform Comparisons of Children’s Touchscreen Interactions
Julia Woodward, Alex Shaw, Aishat Aloba, Ayushi Jain, Jaime Ruiz, and Lisa Anthony
|
11:10 |
Toward an Efficient Body Expression Recognition Based on the Synthesis of a Neutral Movement
Arthur Crenn, Alexandre Meyer, Rizwan Ahmed Khan, Hubert Konik, and Saida Bouakaz
|
11:35 |
Interactive Narration with a Child: Impact of Prosody and Facial Expressions
Ovidiu Șerban, Mukesh Barange, Sahba Zojaji, Alexandre Pauchet, Adeline Richard, and Emilie Chanoni
|
12:00 |
Comparing Human and Machine Recognition of Children’s Touchscreen Stroke Gestures
Alex Shaw, Jaime Ruiz, and Lisa Anthony
|
12:35 - 14:00 |
Lunch
|
12:35 - 14:00 |
Ethics Panel
Room: Grosvenor Suite
Chair: Cosmin Munteanu (University of Toronto)
|
14:00 - 15:30 |
Session 2: Understanding Human Behaviour
Room: Grosvenor Suite
Session Chair: Cosmin Munteanu (University of Toronto)
|
14:00 |
Estimating Verbal Expressions of Task and Social Cohesion in Meetings by Quantifying Paralinguistic Mimicry
Marjolein Nanninga, Yanxia Zhang, Nale Lehmann-Willenbrock, Zoltan Szlávik, and Hayley Hung
|
14:25 |
Predicting the Distribution of Emotion Perception: Capturing Inter-rater Variability
Biqiao Zhang, Georg Essl, and Emily Mower Provost
|
14:50 |
Automatically Predicting Human Knowledgeability through Non-verbal Cues
Abdelwahab Bourai, Tadas Baltrušaitis, and Louis-Philippe Morency
|
15:15 |
Pooling Acoustic and Lexical Features for the Prediction of Valence (Short paper)
Zakaria Aldeneh, Soheil Khorram, Dimitrios Dimitriadis, and Emily Mower Provost
|
15:30 - 16:00 |
Coffee Break
|
16:00 - 18:00 |
Poster Session 1
|
|
Automatic Assessment of Communication Skill in Non-conventional Interview Settings: A Comparative Study
Pooja Rao S. B., Sowmya Rasipuram, Rahul Das, and Dinesh Babu Jayagopi
|
|
Low-Intrusive Recognition of Expressive Movement Qualities
Radoslaw Niewiadomski, Maurizio Mancini, Stefano Piana, Paolo Alborno, Gualtiero Volpe, and Antonio Camurri
|
|
Digitising a Medical Clerking System with Multimodal Interaction Support
Harrison South, Martin Taylor, Huseyin Dogan, and Nan Jiang
|
|
GazeTap: Towards Hands-Free Interaction in the Operating Room
Benjamin Hatscher, Maria Luz, Lennart E. Nacke, Norbert Elkmann, Veit Müller, and Christian Hansen
|
|
Boxer: A Multimodal Collision Technique for Virtual Objects
Byungjoo Lee, Qiao Deng, Eve Hoggan, and Antti Oulasvirta
|
|
Trust Triggers for Multimodal Command and Control Interfaces
Helen Hastie, Xingkun Liu, and Pedro Patron
|
|
TouchScope: A Hybrid Multitouch Oscilloscope Interface
Matthew Heinz, Sven Bertel, and Florian Echtler
|
|
A Multimodal System to Characterise Melancholia: Cascaded Bag of Words Approach
Shalini Bhatia, Munawar Hayat, and Roland Goecke
|
|
Crowdsourcing Ratings of Caller Engagement in Thin-Slice Videos of Human-Machine Dialog: Benefits and Pitfalls
Vikram Ramanarayanan, Chee Wee Leong, David Suendermann, and Keelan Evanini
|
|
Modelling Fusion of Modalities in Multimodal Interactive Systems with MMMM
Bruno Dumas, Jonathan Pirau, and Denis Lalanne
|
|
Temporal Alignment using the Incremental Unit Framework
Casey Kennington, Ting Han, and David Schlangen
|
|
Multimodal Gender Detection
Mohamed Abouelenien, Veronica Perez-Rosas, Rada Mihalcea, and Mihai Burzo
|
|
How May I Help You? Behavior and Impressions in Hospitality Service Encounters
Skanda Muralidhar, Marianne Schmid Mast, and Daniel Gatica-Perez
|
|
Tracking Liking State in Brain Activity while Watching Multiple Movies
Naoto Terasawa, Hiroki Tanaka, Sakriani Sakti, and Satoshi Nakamura
|
|
Does Serial Memory of Locations Benefit from Spatially Congruent Audiovisual Stimuli? Investigating the Effect of Adding Spatial Sound to Visuospatial Sequences
Benjamin Stahl and Georgios Marentakis
|
|
ZSGL: Zero Shot Gestural Learning
Naveen Madapana and Juan Wachs
|
|
Markov Reward Models for Analyzing Group Interaction
Gabriel Murray
|
|
Analyzing First Impressions of Warmth and Competence from Observable Nonverbal Cues in Expert-Novice Interactions
Beatrice Biancardi, Angelo Cafaro, and Catherine Pelachaud
|
|
The NoXi Database: Multimodal Recordings of Mediated Novice-Expert Interactions
Angelo Cafaro, Johannes Wagner, Tobias Baur, Soumia Dermouche, Mercedes Torres Torres, Catherine Pelachaud, Elisabeth André, and Michel Valstar
|
|
Head-Mounted Displays as Opera Glasses: Using Mixed-Reality to Deliver an Egalitarian User Experience during Live Events
Carl Bishop, Augusto Esteves, and Iain McGregor
|
16:00 - 18:00 |
Demo Session 1
|
|
Multimodal Interaction in Classrooms: Implementation of Tangibles in Integrated Music and Math Lessons
Jennifer Müller, Uwe Oestermeier, and Peter Gerjets
|
|
Web-Based Interactive Media Authoring System with Multimodal Interaction
Bok Deuk Song, Yeon Jun Choi, and Jong Hyun Park
|
|
Textured Surfaces for Ultrasound Haptic Displays
Euan Freeman, Ross Anderson, Julie R. Williamson, Graham Wilson, and Stephen A. Brewster
|
|
Rapid Development of Multimodal Interactive Systems: A Demonstration of Platform for Situated Intelligence
Dan Bohus, Sean Andrist, and Mihai Jalobeanu
|
|
MIRIAM: A Multimodal Chat-Based Interface for Autonomous Systems
Helen Hastie, Francisco Javier Chiyah Garcia, David A. Robb, Pedro Patron, and Atanas Laskov
|
|
SAM: The School Attachment Monitor
Dong-Bach Vo, Mohammad Tayarani, Maki Rooksby, Rui Huan, Alessandro Vinciarelli, Helen Minnis, and Stephen A. Brewster
|
|
The Boston Massacre History Experience
David Novick, Laura Rodriguez, Aaron Pacheco, Aaron Rodriguez, Laura Hinojos, Brad Cartwright, Marco Cardiel, Olivia Rodriguez-Herrera, Ivan Gris Sepulveda, and Enrique Ponce
|
|
Demonstrating TouchScope: A Hybrid Multitouch Oscilloscope Interface
Matthew Heinz, Sven Bertel, and Florian Echtler
|
|
The MULTISIMO Multimodal Corpus of Collaborative Interactions
Maria Koutsombogera and Carl Vogel
|
|
Using Mobile Virtual Reality to Empower People with Hidden Disabilities to Overcome Their Barriers
Matthieu Poyade, Glyn Morris, Ian Taylor, and Victor Portela
|
16:00 - 18:00 |
Doctoral Spotlight Session
|
|
Towards Designing Speech Technology Based Assistive Interfaces for Children's Speech Therapy
Revathy Nayar
|
|
Social Robots for Motivation and Engagement in Therapy
Katie Winkle
|
|
Immersive Virtual Eating and Conditioned Food Responses
Nikita Mae B. Tuanquin
|
|
Towards Edible Interfaces: Designing Interactions with Food
Tom Gayler
|
|
Towards a Computational Model for First Impressions Generation
Beatrice Biancardi
|
|
A Decentralised Multimodal Integration of Social Signals: A Bio-Inspired Approach
Esma Mansouri-Benssassi
|
|
Human-Centered Recognition of Children's Touchscreen Gestures
Alex Shaw
|
|
Cross-Modality Interaction between EEG Signals and Facial Expression
Soheil Rayatdoost
|
|
Hybrid Models for Opinion Analysis in Speech Interactions
Valentin Barriere
|
|
Evaluating Engagement in Digital Narratives from Facial Data
Rui Huan
|
|
Social Signal Extraction from Egocentric Photo-Streams
Maedeh Aghaei
|
|
Multimodal Language Grounding for Improved Human-Robot Collaboration: Exploring Spatial Semantic Representations in the Shared Space of Attention
Dimosthenis Kontogiorgos
|
19:00 - 20:00 |
Welcome Reception
Glasgow City Chambers
|
Wednesday, 15 November 2017
Location: Hilton Glasgow Grosvenor
09:00 - 10:00 |
Keynote: Situated Conceptualization: A Framework for Multimodal Interaction
Prof. Larry Barsalou
Room: Grosvenor Suite
Session Chair: Alessandro Vinciarelli (University of Glasgow)
|
10:00 - 10:30 |
Coffee Break
|
10:30 - 12:35 |
Session 3: Touch and Gesture
Room: Grosvenor Suite
Session Chair: Eve Hoggan (Aarhus University)
|
10:30 |
Hand-to-Hand: An Intermanual Illusion of Movement
Dario Pittera, Marianna Obrist, and Ali Israr
|
10:55 |
An Investigation of Dynamic Crossmodal Instantiation in TUIs
Feng Feng and Tony Stockman
|
11:20 |
"Stop over There": Natural Gesture and Speech Interaction for Non-critical Spontaneous Intervention in Autonomous Driving
Robert Tscharn, Marc Erich Latoschik, Diana Löffler, and Jörn Hurtienne
|
11:45 |
Pre-touch Proxemics: Moving the Design Space of Touch Targets from Still Graphics towards Proxemic Behaviors
Ilhan Aslan and Elisabeth André
|
12:10 |
Freehand Grasping in Mixed Reality: Analysing Variation during Transition Phase of Interaction (Short paper)
Maadh Al-Kalbani, Maite Frutos-Pascual, and Ian Williams
|
12:22 |
Rhythmic Micro-Gestures: Discreet Interaction On-the-Go (Short paper)
Euan Freeman, Gareth Griffiths, and Stephen A. Brewster
|
12:35 - 14:00 |
Lunch
|
12:35 - 14:00 |
SIGCHI Volunteering Session
Room: Grosvenor Suite
Chair: Aaron Quigley (University of St Andrews), ACM SIGCHI Vice President for Conferences
|
14:00 - 15:00 |
Session 4: Sound and Interaction
Room: Grosvenor Suite
Session Chair: Euan Freeman (University of Glasgow)
|
14:00 |
Evaluation of Psychoacoustic Sound Parameters for Sonification
Jamie Ferguson and Stephen A. Brewster
|
14:25 |
Utilising Natural Cross-Modal Mappings for Visual Control of Feature Based Sound Synthesis
Augoustinos Tsiros and Grégory Leplâtre
|
15:00 |
Poster Session 2
|
|
Analyzing Gaze Behavior during Turn-Taking for Estimating Empathy Skill Level
Ryo Ishii, Shiro Kumano, and Kazuhiro Otsuka
|
|
Text Based User Comments as a Signal for Automatic Language Identification of Online Videos
A. Seza Doğruöz, Natalia Ponomareva, Sertan Girgin, Reshu Jain, and Christoph Oehler
|
|
Gender and Emotion Recognition with Implicit User Signals
Maneesh Bilalpur, Seyed Mostafa Kia, Manisha Chawla, Tat-Seng Chua, and Ramanathan Subramanian
|
|
Animating the Adelino Robot with ERIK: The Expressive Robotics Inverse Kinematics
Tiago Ribeiro and Ana Paiva
|
|
Automatic Detection of Pain from Spontaneous Facial Expressions
Fatma Meawad, Su-Yin Yang, and Fong Ling Loy
|
|
Evaluating Content-Centric vs. User-Centric Ad Affect Recognition
Abhinav Shukla, Shruti Gullapuram, Harish Katti, Karthik Yadati, Mohan Kankanhalli, and Ramanathan Subramanian
|
|
A Domain Adaptation Approach to Improve Speaker Turn Embedding using Face Representation
Nam Le and Jean-Marc Odobez
|
|
Computer Vision Based Fall Detection by a Convolutional Neural Network
Miao Yu, Liyun Gong, and Stefanos Kollias
|
|
Predicting Meeting Extracts in Group Discussions using Multimodal Convolutional Neural Networks
Fumio Nihei, Yukiko I. Nakano, and Yutaka Takase
|
|
The Relationship between Task-Induced Stress, Vocal Changes, and Physiological State during a Dyadic Team Task
Catherine Neubauer, Mathieu Chollet, Sharon Mozgai, Mark Dennison, Peter Khooshabeh, and Stefan Scherer
|
|
Meyendtris: A Hands-Free, Multimodal Tetris Clone using Eye Tracking and Passive BCI for Intuitive Neuroadaptive Gaming
Laurens R. Krol, Sarah-Christin Freytag, and Thorsten O. Zander
|
|
AMHUSE: A Multimodal dataset for HUmour SEnsing
Giuseppe Boccignone, Donatello Conte, Vittorio Cuculo, and Raffaella Lanzarotti
|
|
GazeTouchPIN: Protecting Sensitive Data on Mobile Devices using Secure Multimodal Authentication
Mohamed Khamis, Mariam Hassib, Emanuel von Zezschwitz, Andreas Bulling, and Florian Alt
|
|
Multi-task Learning of Social Psychology Assessments and Nonverbal Features for Automatic Leadership Identification
Cigdem Beyan, Francesca Capozzi, Cristina Becchio, and Vittorio Murino
|
|
Multimodal Analysis of Vocal Collaborative Search: A Public Corpus and Results
Daniel McDuff, Paul Thomas, Mary Czerwinski, and Nick Craswell
|
|
UE-HRI: A New Dataset for the Study of User Engagement in Spontaneous Human-Robot Interactions
Atef Ben Youssef, Chloé Clavel, Slim Essid, Miriam Bilac, Marine Chamoux, and Angelica Lim
|
|
Mining a Multimodal Corpus of Doctor’s Training for Virtual Patient’s Feedbacks
Chris Porhet, Magalie Ochs, Jorane Saubesty, Grégoire de Montcheuil, and Roxane Bertrand
|
|
Multimodal Affect Recognition in an Interactive Gaming Environment using Eye Tracking and Speech Signals
Ashwaq Alhargan, Neil Cooke, and Tareq Binjammaz
|
15:00 |
Demo Session 2
|
|
Bot or Not: Exploring the Fine Line between Cyber and Human Identity
Mirjam Wester, Matthew P. Aylett, and David A. Braude
|
|
Modulating the Non-verbal Social Signals of a Humanoid Robot
Amol Deshmukh, Bart Craenen, Alessandro Vinciarelli, and Mary Ellen Foster
|
|
Thermal In-Car Interaction for Navigation
Patrizia Di Campli San Vito, Stephen A. Brewster, Frank Pollick, and Stuart White
|
|
AQUBE: An Interactive Music Reproduction System for Aquariums
Daisuke Sasaki, Musashi Nakajima, and Yoshihiro Kanno
|
|
Real-Time Mixed-Reality Telepresence via 3D Reconstruction with HoloLens and Commodity Depth Sensors
Michal Joachimczak, Juan Liu, and Hiroshi Ando
|
|
Evaluating Robot Facial Expressions
Ruth Aylett, Frank Broz, Ayan Ghosh, Peter McKenna, Gnanathusharan Rajendran, Mary Ellen Foster, Giorgio Roffo, and Alessandro Vinciarelli
|
|
Bimodal Feedback for In-Car Mid-Air Gesture Interaction
Gözel Shakeri, John H. Williamson, and Stephen A. Brewster
|
|
A Modular, Multimodal Open-Source Virtual Interviewer Dialog Agent
Kirby Cofino, Vikram Ramanarayanan, Patrick Lange, David Pautler, David Suendermann-Oeft, and Keelan Evanini
|
|
Wearable Interactive Display for the Local Positioning System (LPS)
Daniel Lofaro, Christopher Taylor, Ryan Tse, and Donald Sofge
|
15:00 |
Grand Challenge Posters
|
|
From Individual to Group-Level Emotion Recognition: EmotiW 5.0
Abhinav Dhall, Roland Goecke, Shreya Ghosh, Jyoti Joshi, Jesse Hoey and Tom Gedeon
|
|
Multi-modal Emotion Recognition using Semi-supervised Learning and Multiple Neural Networks in the Wild
Dae Ha Kim, Min Kyu Lee, Dong Yoon Choi and Byung Cheol Song
|
|
Modeling Multimodal Cues in a Deep Learning-Based Framework for Emotion Recognition in the Wild
Stefano Pini, Olfa Ben Ahmed, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara and Benoit Huet
|
|
Group-Level Emotion Recognition using Transfer Learning from Face Identification
Alexandr Rassadin, Alexey Gruzdev and Andrey Savchenko
|
|
Group Emotion Recognition with Individual Facial Emotion CNNs and Global Image Based CNNs
Lianzhi Tan, Kaipeng Zhang, Kai Wang, Xiaoxing Zeng, Xiaojiang Peng and Yu Qiao
|
|
Learning Supervised Scoring Ensemble for Emotion Recognition in the Wild
Ping Hu, Dongqi Cai, Shandong Wang, Anbang Yao and Yurong Chen
|
|
Group Emotion Recognition in the Wild by Combining Deep Neural Networks for Facial Expression Classification and Scene-Context Analysis
Asad Abbas and Stephan K. Chalup
|
|
Temporal Multimodal Fusion for Video Emotion Classification in the Wild
Valentin Vielzeuf, Stéphane Pateux and Frederic Jurie
|
|
Audio-Visual Emotion Recognition using Deep Transfer Learning and Multiple Temporal Models
Xi Ouyang, Shigenori Kawaai, Ester Gue Hua Goh, Shengmei Shen, Wan Ding, Huaiping Ming and Dong-Yan Huang
|
|
Multi-Level Feature Fusion for Group-Level Emotion Recognition
Balaji B. and V. Ramana Murthy Oruganti
|
|
A New Deep-Learning Framework for Group Emotion Recognition
Qinglan Wei, Yijia Zhao, Qihua Xu, Liandong Li, Jun He, Lejun Yu and Bo Sun
|
|
Emotion Recognition in the Wild using Deep Neural Networks and Bayesian Classifiers
Luca Surace, Massimiliano Patacchiola, Elena Battini Sonmez, William Spataro and Angelo Cangelosi
|
|
Emotion Recognition with Multimodal Features and Temporal Models
Shuai Wang, Wenxuan Wang, Jinming Zhao, Shizhe Chen, Qin Jin, Shilei Zhang and Yong Qin
|
|
Group-Level Emotion Recognition using Deep Models on Image Scene, Faces, and Skeletons
Xin Guo, Luisa F. Polania and Kenneth E. Barner
|
15:30 |
Coffee Break
|
17:00 - 18:00 |
Sustained Contribution Awardee Talk: Steps towards Collaborative Multimodal Dialogue
Dr. Phil Cohen
Room: Grosvenor Suite
Session Chair: Julie Williamson (University of Glasgow)
|
18:00 |
Banquet, sponsored by OpenStream
Òran Mór
|
Thursday, 16 November 2017
Location: Hilton Glasgow Grosvenor
09:00 - 10:00 |
Keynote: Collaborative Robots: From Action and Interaction to Collaboration
Prof. Danica Kragic
Room: Grosvenor Suite
Session Chair: Louis-Philippe Morency (Carnegie Mellon University)
|
10:00 - 10:30 |
Coffee Break
|
10:30 - 12:35 |
Session 5: Methodology
Room: Grosvenor Suite
Session Chair: Catherine Pelachaud (UPMC)
|
10:30 |
Automatic Classification of Auto-correction Errors in Predictive Text Entry Based on EEG and Context Information
Felix Putze, Maik Schünemann, Tanja Schultz, and Wolfgang Stuerzlinger
|
10:55 |
Cumulative Attributes for Pain Intensity Estimation
Joy Onyekachukwu Egede and Michel Valstar
|
11:20 |
Towards the Use of Social Interaction Conventions as Prior for Gaze Model Adaptation
Rémy Siegfried, Yu Yu, and Jean-Marc Odobez
|
11:45 |
Multimodal Sentiment Analysis with Word-Level Fusion and Reinforcement Learning
Minghai Chen, Sen Wang, Paul Pu Liang, Tadas Baltrušaitis, Amir Zadeh, and Louis-Philippe Morency
|
12:10 |
IntelliPrompter: Speech-Based Dynamic Note Display Interface for Oral Presentations
Reza Asadi, Ha Trinh, Harriet Fell, and Timothy Bickmore
|
12:35 - 14:00 |
Lunch
|
14:00 - 16:00 |
Session 6: Artificial Agents and Wearable Sensors
Room: Grosvenor Suite
Session Chair: Mary Ellen Foster (University of Glasgow)
|
14:00 |
Head and Shoulders: Automatic Error Detection in Human-Robot Interaction
Pauline Trung, Manuel Giuliani, Michael Miksch, Gerald Stollnberger, Susanne Stadler, Nicole Mirnig, and Manfred Tscheligi
|
14:25 |
The Reliability of Non-verbal Cues for Situated Reference Resolution and Their Interplay with Language: Implications for Human Robot Interaction
Stephanie Gross, Brigitte Krenn, and Matthias Scheutz
|
14:50 |
Do You Speak to a Human or a Virtual Agent? Automatic Analysis of User’s Social Cues during Mediated Communication
Magalie Ochs, Nathan Libermann, Axel Boidin, and Thierry Chaminade
|
15:15 |
Virtual Debate Coach Design: Assessing Multimodal Argumentation Performance
Volha Petukhova, Tobias Mayer, Andrei Malchanau, and Harry Bunt
|
15:40 |
Data Augmentation of Wearable Sensor Data for Parkinson’s Disease Monitoring using Convolutional Neural Networks (Short paper)
Terry Taewoong Um, Franz Michael Josef Pfister, Daniel Pichler, Satoshi Endo, Muriel Lang, Sandra Hirche, Urban Fietzek, and Dana Kulić
|
16:00 |
Coffee Break
|
16:15 - 16:30 |
Grand Challenge Overview
Room: Grosvenor Suite
Session Chair: Abhinav Dhall (Indian Institute of Technology Ropar)
|
16:30 - 17:30 |
ICMI Town Hall Meeting
Room: Grosvenor Suite
Session Chair: Louis-Philippe Morency (CMU)
|
17:30 - 18:30 |
Presentation of ICMI 2018 & Closing Comments
Room: Grosvenor Suite
General Chairs: Alessandro Vinciarelli and Edward Lank
|
Friday, 17 November 2017
Location: Sir Alwyn Williams Building, University of Glasgow
09:00 - 12:30 |
Tutorial: Multimodal Machine Learning
Dr. Louis-Philippe Morency
Room: Level 5
|
09:00 - 17:00 |
Grand Challenge: Emotion Recognition in the Wild Challenge 2017
Organisers: Abhinav Dhall, Roland Goecke, Jyoti Joshi, Jesse Hoey and Tom Gedeon
Room: 422/423
|
10:15 |
Coffee Break
|
12:30 |
Lunch
|
13:30 |
University of Glasgow Tour (Group 1)
|
13:45 |
University of Glasgow Tour (Group 2)
|