Doctoral Consortium

The following submissions have been accepted for this year's doctoral consortium:

  • Social Signal Extraction from Egocentric Photo-Streams
    Maedeh Aghaei - University of Barcelona, Barcelona, Spain

  • Hybrid Models for Opinion Analysis in Speech Interactions
    Valentin Barriere - Telecom ParisTech, Paris, France

  • Towards a Computational Model for First Impressions Generation
    Beatrice Biancardi - CNRS-ISIR, UPMC, Paris, France

  • Towards Edible Interfaces: Designing Interactions with Food
    Tom Gayler - Lancaster University, Lancaster, United Kingdom

  • Evaluating Engagement in Digital Narratives from Facial Data
    Rui Huan - University of Glasgow, Glasgow, United Kingdom

  • Grounded Language Learning for Collaborative Robots using Multimodal Cues
    Dimosthenis Kontogiorgos - KTH Royal Institute of Technology, Stockholm, Sweden

  • A Decentralised Multimodal Integration of Social Signals: A Bio-Inspired Approach
    Esma Mansouri-Benssassi - University of St Andrews, St Andrews, Fife, Scotland

  • Towards Designing Speech Technology based Assistive Interfaces for Children’s Speech Therapy
Revathy Nayar - University of Strathclyde, Glasgow, United Kingdom

  • Human-Centered Recognition of Children’s Touchscreen Gestures
    Alex Shaw - University of Florida, Gainesville, Florida, United States

  • Cross-Modality Interaction Between EEG Signals and Facial Expression
    Soheil Rayatdoost - Swiss Center for Affective Sciences - University of Geneva, Geneva, Switzerland

  • Immersive Virtual Eating and Conditioned Food Responses
    Nikita Mae Tuanquin - University of Canterbury, Christchurch, Canterbury, New Zealand

  • Social Robots for Motivation & Engagement in Therapy
    Katie Winkle - Bristol Robotics Laboratory, Bristol, United Kingdom


ICMI 2017 ACM International Conference on Multimodal Interaction. Copyright © 2017