Demonstrations

The ICMI 2015 Demonstrations & Exhibits session provides a forum to showcase innovative implementations, systems, and technologies that demonstrate new ideas about interactive multimodal interfaces. It can also serve to introduce commercial products not covered in previous scientific publications. Demonstrations and exhibits should be short so that they can be presented several times. We particularly encourage demonstrations of interactive and multimodal analysis systems and sensors. Proposals may be of two types: demonstrations and exhibits. The main difference is that a demonstration is accompanied by a 2-page paper, which will be included in the ICMI proceedings. We encourage the submission of both early research prototypes and interesting mature systems. In addition, authors of accepted regular research papers are invited to participate in the demonstration sessions as well.

Accepted Demo Papers

The Application of Word Processor UI Paradigms to Audio and Animation Editing

Andre Milota

Real-time Gesture Recognition on an Economical Fabric Sensor

Xi Laura Cang, Paul Bucci, Karon MacLean

Public Speaking Training with a Multimodal Interactive Virtual Audience Framework

Mathieu Chollet, Kalin Stefanov, Helmut Prendinger, Stefan Scherer

A Multimodal System for Public Speaking with Real Time Feedback

Fiona Dermody, Alistair Sutherland

Model of Personality-Based, Nonverbal Behavior in Affective Virtual Humanoid Character

Maryam Saberi, Ulysses Bernardet, Steve DiPaola

AttentiveLearner: Adaptive Mobile MOOC Learning via Implicit Cognitive States Inference

Xiang Xiao, Phuong Pham, Jingtao Wang

Interactive Web-based Image Sonification for the Blind

Torsten Wörtwein, Boris Schauerte, Karin Müller, Rainer Stiefelhagen

Nakama: A Companion for Non-verbal Affective Communication

Christian Willemse, Gerald Munters, Jan van Erp, Dirk Heylen

Wir im Kiez - Multimodal App for Mutual Help Among Elderly Neighbours

Sven Schmeier, Aaron Ruß, Norbert Reithinger

Interact: Tightly-coupling Multimodal Dialog with an Interactive Virtual Assistant

Ethan Selfridge, Michael Johnston

The UTEP AGENT Framework

David Novick, Ivan Gris Sepulveda, Diego Rivera, Adriana Camacho, Alex Rayon, Mario Gutierrez

A Distributed Architecture for Interacting with NAO

Fabien Badeig, Quentin Pelorson, Soraya Arias, Vincent Drouard, Israel Gebru, Xiaofei Li, Georgios Evangelidis, Radu Horaud

Who's Speaking? Audio Supervised Classification of Active Speakers in Video

Punarjay Chakravarty

Multimodal Interaction with a Bifocal View on Mobile Devices

Sébastien Pelurson

Digital Flavor: Towards Digitally Simulating Virtual Flavors

Nimesha Ranasinghe, Gajan Suthokumar, Kuan-Yi Lee, Ellen Yi-Luen Do

Detecting Mastication - A Wearable Approach

Abdelkareem Bedri, Apoorva Verlekar, Edison Thomaz, Valerie Avva, Thad Starner

Adjacent Vehicle Collision Warning System using Image Sensor and Inertial Measurement Unit

Asif Iqbal, Carlos Busso Recabarren, Nicholas R. Gans


ICMI 2015 ACM International Conference on Multimodal Interaction. 9-13 November 2015, Seattle, USA.