Doctoral Consortium

The goal of the ICMI Doctoral Consortium is to provide PhD students with an opportunity to present their work to a group of mentors and peers from a diverse set of academic and industrial institutions, to receive feedback on their doctoral research plan and progress, and to build a cohort of young researchers interested in designing multimodal interfaces. We invite students from all PhD-granting institutions who are in the process of forming or carrying out a plan for their PhD research in the area of designing multimodal interfaces. The Consortium will be held on November 9th, 2015. We will provide financial support to all student participants, covering part of their costs (travel, registration, meals, etc.).

Accepted Papers

Please contact the Doctoral Consortium chairs, Carlos Busso (busso@utdallas.edu) or Vidhyasaharan Sethu (v.sethu@unsw.edu.au), with any questions about the information below.

Title | Author | University
Temporal Association Rules for modelling multimodal social signals | Janssoone, Thomas | UPMC, France
Detecting and Synthesizing Synchronous Joint Action in Human-Robot Teams | Iqbal, Tariq | University of Notre Dame, USA
Micro-opinion Sentiment Intensity Analysis and Summarization in Online Videos | Zadeh, Amir | Carnegie Mellon University, USA
Attention and Engagement Aware Multimodal Conversational Systems | Yu, Zhou | Carnegie Mellon University, USA
Implicit Human-computer Interaction: Two Complementary Approaches | Wache, Julia | University of Trento, Italy
Instantaneous and Robust Eye-Activity Based Task Analysis | Wong, Hoe Kin | University of New South Wales, Australia
Challenges in Deep Learning for Multimodal Applications | Ghosh, Sayan | USC Institute for Creative Technologies, USA
Exploring Intent-driven Multimodal Interface for Geographical Information System | Sun, Feng | College of Information Sciences and Technology, USA
Software Techniques for Multimodal Input Processing in Realtime Interactive Systems | Fischbach, Martin | University of Würzburg, Germany
Gait and Postural Sway Analysis, A Multi-Modal System | Ismail, Hafsa | University of Canberra, Australia
A Computational Model of Culture-Specific Emotions for Artificial Agents in the Learning Domain | Naidu, Ganapreeta | Universiti Sains Malaysia, Malaysia
Record, Transform & Reproduce Social Encounters in Immersive VR: An Iterative Approach | Kolkmeier, Jan | University of Twente, Netherlands
Multimodal Affect Detection in the Wild: Accuracy, Availability, and Generalizability | Bosch, Nigel | University of Notre Dame, USA
Multimodal assessment of Teaching Behavior in Immersive Rehearsal Environment - TeachLivE | Barmaki, Roghayeh | University of Central Florida, USA

Talks

Each student will have 25 minutes to present, including Q&A. Because the goal of the Doctoral Consortium is to receive feedback from mentors rather than to present completed research, we recommend that students limit their talks to ~15 minutes and use the remainder of the time to receive feedback from the mentors. Students should plan to use their own computers to present; the room will have a projector. The tight schedule does not allow for demos or extensive depth on any point, so students should provide a high-level overview of their research topic, plan, and progress. It is acceptable to include recent work in your talk (i.e., work that you might have completed since you submitted your abstract), but we expect that you do not deviate too much from the research plan presented in your respective proposals.

Posters

All students are required to prepare a poster to present at the main conference (time TBA). A poster board of size [TBA] will be available to each author. The posters do not need to follow a particular template.

Mentors

TBA


Doctoral Consortium Chairs

For further questions, contact the Doctoral Consortium co-chairs:

  • Carlos Busso (University of Texas at Dallas, USA)
  • Vidhyasaharan Sethu (University of New South Wales, Australia)
