ICMI '15: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction

SESSION: Keynote Address 1

Session Chair: Zhengyou Zhang

Sharing Representations for Long Tail Computer Vision Problems

  • Samy Bengio

SESSION: Keynote Address 2

Session Chair: Phil Cohen

Interaction Studies with Social Robots

  • Kerstin Dautenhahn

SESSION: Keynote Address 3 (Sustained Accomplishment Award Talk)

Session Chair: Daniel Gatica-Perez

Connections: 2015 ICMI Sustained Accomplishment Award Lecture

  • Eric Horvitz

SESSION: Oral Session 1: Machine Learning in Multimodal Systems

Session Chair: Radu Horaud

Combining Two Perspectives on Classifying Multimodal Data for Recognizing Speaker Traits

  • Moitreya Chatterjee
  • Sunghyun Park
  • Louis-Philippe Morency
  • Stefan Scherer

Personality Trait Classification via Co-Occurrent Multiparty Multimodal Event Discovery

  • Shogo Okada
  • Oya Aran
  • Daniel Gatica-Perez

Evaluating Speech, Face, Emotion and Body Movement Time-series Features for Automated Multimodal Presentation Scoring

  • Vikram Ramanarayanan
  • Chee Wee Leong
  • Lei Chen
  • Gary Feng
  • David Suendermann-Oeft

Gender Representation in Cinematic Content: A Multimodal Approach

  • Tanaya Guha
  • Che-Wei Huang
  • Naveen Kumar
  • Yan Zhu
  • Shrikanth S. Narayanan

SESSION: Oral Session 2: Audio-Visual, Multimodal Inference

Session Chair: Yukiko I. Nakano

Effects of Good Speaking Techniques on Audience Engagement

  • Keith Curtis
  • Gareth J.F. Jones
  • Nick Campbell

Multimodal Public Speaking Performance Assessment

  • Torsten Wörtwein
  • Mathieu Chollet
  • Boris Schauerte
  • Louis-Philippe Morency
  • Rainer Stiefelhagen
  • Stefan Scherer

I Would Hire You in a Minute: Thin Slices of Nonverbal Behavior in Job Interviews

  • Laurent Son Nguyen
  • Daniel Gatica-Perez

Deception Detection using Real-life Trial Data

  • Verónica Pérez-Rosas
  • Mohamed Abouelenien
  • Rada Mihalcea
  • Mihai Burzo

SESSION: Oral Session 3: Language, Speech and Dialog

Session Chair: Jill F. Lehman

Exploring Turn-taking Cues in Multi-party Human-robot Discussions about Objects

  • Gabriel Skantze
  • Martin Johansson
  • Jonas Beskow

Visual Saliency and Crowdsourcing-based Priors for an In-car Situated Dialog System

  • Teruhisa Misu

Leveraging Behavioral Patterns of Mobile Applications for Personalized Spoken Language Understanding

  • Yun-Nung Chen
  • Ming Sun
  • Alexander I. Rudnicky
  • Anatole Gershman

Who's Speaking?: Audio-Supervised Classification of Active Speakers in Video

  • Punarjay Chakravarty
  • Sayeh Mirzaei
  • Tinne Tuytelaars
  • Hugo Van hamme

SESSION: Oral Session 4: Communication Dynamics

Session Chair: Louis-Philippe Morency

Predicting Participation Styles using Co-occurrence Patterns of Nonverbal Behaviors in Collaborative Learning

  • Yukiko I. Nakano
  • Sakiko Nihonyanagi
  • Yutaka Takase
  • Yuki Hayashi
  • Shogo Okada

Multimodal Fusion using Respiration and Gaze for Predicting Next Speaker in Multi-Party Meetings

  • Ryo Ishii
  • Shiro Kumano
  • Kazuhiro Otsuka

Deciphering the Silent Participant: On the Use of Audio-Visual Cues for the Classification of Listener Categories in Group Discussions

  • Catharine Oertel
  • Kenneth A. Funes Mora
  • Joakim Gustafson
  • Jean-Marc Odobez

Retrieving Target Gestures Toward Speech Driven Animation with Meaningful Behaviors

  • Najmeh Sadoughi
  • Carlos Busso

SESSION: Oral Session 5: Interaction Techniques

Session Chair: Sharon Oviatt

Look & Pedal: Hands-free Navigation in Zoomable Information Spaces through Gaze-supported Foot Input

  • Konstantin Klamka
  • Andreas Siegel
  • Stefan Vogt
  • Fabian Göbel
  • Sophie Stellmach
  • Raimund Dachselt

Gaze+Gesture: Expressive, Precise and Targeted Free-Space Interactions

  • Ishan Chatterjee
  • Robert Xiao
  • Chris Harrison

Digital Flavor: Towards Digitally Simulating Virtual Flavors

  • Nimesha Ranasinghe
  • Gajan Suthokumar
  • Kuan-Yi Lee
  • Ellen Yi-Luen Do

Different Strokes and Different Folks: Economical Dynamic Surface Sensing and Affect-Related Touch Recognition

  • Xi Laura Cang
  • Paul Bucci
  • Andrew Strang
  • Jeff Allen
  • Karon MacLean
  • H.Y. Sean Liu

SESSION: Oral Session 6: Mobile and Wearable

Session Chair: Michael Johnston

MPHA: A Personal Hearing Doctor Based on Mobile Devices

  • Yuhao Wu
  • Jia Jia
  • WaiKim Leung
  • Yejun Liu
  • Lianhong Cai

Towards Attentive, Bi-directional MOOC Learning on Mobile Devices

  • Xiang Xiao
  • Jingtao Wang

An Experiment on the Feasibility of Spatial Acquisition using a Moving Auditory Cue for Pedestrian Navigation

  • Yeseul Park
  • Kyle Koh
  • Heonjin Park
  • Jinwook Seo

A Wearable Multimodal Interface for Exploring Urban Points of Interest

  • Antti Jylhä
  • Yi-Ta Hsieh
  • Valeria Orso
  • Salvatore Andolina
  • Luciano Gamberini
  • Giulio Jacucci

POSTER SESSION: Poster Session

Session Chairs: Radu Horaud, Dan Bohus

ECA Control using a Single Affective User Dimension

  • Fred Charles
  • Florian Pecune
  • Gabor Aranyi
  • Catherine Pelachaud
  • Marc Cavazza

Multimodal Interaction with a Bifocal View on Mobile Devices

  • Sebastien Pelurson
  • Laurence Nigay

NaLMC: A Database on Non-acted and Acted Emotional Sequences in HCI

  • Kim Hartmann
  • Julia Krüger
  • Jörg Frommer
  • Andreas Wendemuth

Exploiting Multimodal Affect and Semantics to Identify Politically Persuasive Web Videos

  • Behjat Siddiquie
  • Dave Chisholm
  • Ajay Divakaran

Toward Better Understanding of Engagement in Multiparty Spoken Interaction with Children

  • Samer Al Moubayed
  • Jill Lehman

Gestimator: Shape and Stroke Similarity Based Gesture Recognition

  • Yina Ye
  • Petteri Nurmi

Classification of Children's Social Dominance in Group Interactions with Robots

  • Sarah Strohkorb
  • Iolanda Leite
  • Natalie Warren
  • Brian Scassellati

Spectators' Synchronization Detection based on Manifold Representation of Physiological Signals: Application to Movie Highlights Detection

  • Michal Muszynski
  • Theodoros Kostoulas
  • Guillaume Chanel
  • Patrizia Lombardo
  • Thierry Pun

Implicit User-centric Personality Recognition Based on Physiological Responses to Emotional Videos

  • Julia Wache
  • Ramanathan Subramanian
  • Mojtaba Khomami Abadi
  • Radu-Laurentiu Vieriu
  • Nicu Sebe
  • Stefan Winkler

Detecting Mastication: A Wearable Approach

  • Abdelkareem Bedri
  • Apoorva Verlekar
  • Edison Thomaz
  • Valerie Avva
  • Thad Starner

Exploring Behavior Representation for Learning Analytics

  • Marcelo Worsley
  • Stefan Scherer
  • Louis-Philippe Morency
  • Paulo Blikstein

Multimodal Human Activity Recognition for Industrial Manufacturing Processes in Robotic Workcells

  • Alina Roitberg
  • Nikhil Somani
  • Alexander Perzylo
  • Markus Rickert
  • Alois Knoll

Accuracy vs. Availability Heuristic in Multimodal Affect Detection in the Wild

  • Nigel Bosch
  • Huili Chen
  • Sidney D'Mello
  • Ryan Baker
  • Valerie Shute

Dynamic Active Learning Based on Agreement and Applied to Emotion Recognition in Spoken Interactions

  • Yue Zhang
  • Eduardo Coutinho
  • Zixing Zhang
  • Caijiao Quan
  • Bjoern Schuller

Sharing Touch Interfaces: Proximity-Sensitive Touch Targets for Tablet-Mediated Collaboration

  • Ilhan Aslan
  • Thomas Meneweger
  • Verena Fuchsberger
  • Manfred Tscheligi

Analyzing Multimodality of Video for User Engagement Assessment

  • Fahim A. Salim
  • Fasih Haider
  • Owen Conlan
  • Saturnino Luz
  • Nick Campbell

Adjacent Vehicle Collision Warning System using Image Sensor and Inertial Measurement Unit

  • Asif Iqbal
  • Carlos Busso
  • Nicholas R. Gans

Automatic Detection of Mind Wandering During Reading Using Gaze and Physiology

  • Robert Bixler
  • Nathaniel Blanchard
  • Luke Garrison
  • Sidney D'Mello

Multimodal Detection of Depression in Clinical Interviews

  • Hamdi Dibeklioğlu
  • Zakia Hammal
  • Ying Yang
  • Jeffrey F. Cohn

Spoken Interruptions Signal Productive Problem Solving and Domain Expertise in Mathematics

  • Sharon Oviatt
  • Kevin Hang
  • Jianlong Zhou
  • Fang Chen

Active Haptic Feedback for Touch Enabled TV Remote

  • Anton Treskunov
  • Mike Darnell
  • Rongrong Wang

A Visual Analytics Approach to Finding Factors Improving Automatic Speaker Identifications

  • Pierrick Bruneau
  • Mickaël Stefas
  • Hervé Bredin
  • Johann Poignant
  • Thomas Tamisier
  • Claude Barras

The Influence of Visual Cues on Passive Tactile Sensations in a Multimodal Immersive Virtual Environment

  • Nina Rosa
  • Wolfgang Hürst
  • Wouter Vos
  • Peter Werkhoven

Detection of Deception in the Mafia Party Game

  • Sergey Demyanov
  • James Bailey
  • Kotagiri Ramamohanarao
  • Christopher Leckie

Individuality-Preserving Voice Reconstruction for Articulation Disorders Using Text-to-Speech Synthesis

  • Reina Ueda
  • Tetsuya Takiguchi
  • Yasuo Ariki

Behavioral and Emotional Spoken Cues Related to Mental States in Human-Robot Social Interaction

  • Lucile Bechade
  • Guillaume Dubuisson Duplessis
  • Mohamed Sehili
  • Laurence Devillers

Viewpoint Integration for Hand-Based Recognition of Social Interactions from a First-Person View

  • Sven Bambach
  • David J. Crandall
  • Chen Yu

A Multimodal System for Real-Time Action Instruction in Motor Skill Learning

  • Iwan de Kok
  • Julian Hough
  • Felix Hülsmann
  • Mario Botsch
  • David Schlangen
  • Stefan Kopp

DEMONSTRATION SESSION: Demonstrations

Session Chair: Stefan Scherer

The Application of Word Processor UI Paradigms to Audio and Animation Editing

  • Andre D. Milota

CuddleBits: Friendly, Low-cost Furballs that Respond to Touch

  • Laura Cang
  • Paul Bucci
  • Karon E. MacLean

Public Speaking Training with a Multimodal Interactive Virtual Audience Framework

  • Mathieu Chollet
  • Kalin Stefanov
  • Helmut Prendinger
  • Stefan Scherer

A Multimodal System for Public Speaking with Real Time Feedback

  • Fiona Dermody
  • Alistair Sutherland

Model of Personality-Based, Nonverbal Behavior in Affective Virtual Humanoid Character

  • Maryam Saberi
  • Ulysses Bernardet
  • Steve DiPaola

AttentiveLearner: Adaptive Mobile MOOC Learning via Implicit Cognitive States Inference

  • Xiang Xiao
  • Phuong Pham
  • Jingtao Wang

Interactive Web-based Image Sonification for the Blind

  • Torsten Wörtwein
  • Boris Schauerte
  • Karin E. Müller
  • Rainer Stiefelhagen

Nakama: A Companion for Non-verbal Affective Communication

  • Christian J.A.M. Willemse
  • Gerald M. Munters
  • Jan B.F. van Erp
  • Dirk Heylen

Wir im Kiez: Multimodal App for Mutual Help Among Elderly Neighbours

  • Sven Schmeier
  • Aaron Ruß
  • Norbert Reithinger

Interact: Tightly-coupling Multimodal Dialog with an Interactive Virtual Assistant

  • Ethan Selfridge
  • Michael Johnston

The UTEP AGENT System

  • David Novick
  • Iván Gris Sepulveda
  • Diego A. Rivera
  • Adriana Camacho
  • Alex Rayon
  • Mario Gutierrez

A Distributed Architecture for Interacting with NAO

  • Fabien Badeig
  • Quentin Pelorson
  • Soraya Arias
  • Vincent Drouard
  • Israel Gebru
  • Xiaofei Li
  • Georgios Evangelidis
  • Radu Horaud

SESSION: Grand Challenge 1: Recognition of Social Touch Gestures Challenge 2015

Touch Challenge '15: Recognizing Social Touch Gestures

  • Merel M. Jung
  • Xi Laura Cang
  • Mannes Poel
  • Karon E. MacLean

The Grenoble System for the Social Touch Challenge at ICMI 2015

  • Viet-Cuong Ta
  • Wafa Johal
  • Maxime Portaz
  • Eric Castelli
  • Dominique Vaufreydaz

Social Touch Gesture Recognition using Random Forest and Boosting on Distinct Feature Sets

  • Yona Falinie A. Gaus
  • Temitayo Olugbade
  • Asim Jan
  • Rui Qin
  • Jingxin Liu
  • Fan Zhang
  • Hongying Meng
  • Nadia Bianchi-Berthouze

Recognizing Touch Gestures for Social Human-Robot Interaction

  • Tugce Balli Altuglu
  • Kerem Altun

Detecting and Identifying Tactile Gestures using Deep Autoencoders, Geometric Moments and Gesture Level Features

  • Dana Hughes
  • Nicholas Farrow
  • Halley Profita
  • Nikolaus Correll

SESSION: Grand Challenge 2: Emotion Recognition in the Wild Challenge 2015

Video and Image based Emotion Recognition Challenges in the Wild: EmotiW 2015

  • Abhinav Dhall
  • O.V. Ramana Murthy
  • Roland Goecke
  • Jyoti Joshi
  • Tom Gedeon

Hierarchical Committee of Deep CNNs with Exponentially-Weighted Decision Fusion for Static Facial Expression Recognition

  • Bo-Kyeong Kim
  • Hwaran Lee
  • Jihyeon Roh
  • Soo-Young Lee

Image based Static Facial Expression Recognition with Multiple Deep Network Learning

  • Zhiding Yu
  • Cha Zhang

Deep Learning for Emotion Recognition on Small Datasets using Transfer Learning

  • Hong-Wei Ng
  • Viet Dung Nguyen
  • Vassilios Vonikakis
  • Stefan Winkler

Capturing AU-Aware Facial Features and Their Latent Relations for Emotion Recognition in the Wild

  • Anbang Yao
  • Junchao Shao
  • Ningning Ma
  • Yurong Chen

Contrasting and Combining Least Squares Based Learners for Emotion Recognition in the Wild

  • Heysem Kaya
  • Furkan Gürpinar
  • Sadaf Afshar
  • Albert Ali Salah

Recurrent Neural Networks for Emotion Recognition in Video

  • Samira Ebrahimi Kahou
  • Vincent Michalski
  • Kishore Konda
  • Roland Memisevic
  • Christopher Pal

Multiple Models Fusion for Emotion Recognition in the Wild

  • Jianlong Wu
  • Zhouchen Lin
  • Hongbin Zha

A Deep Feature based Multi-kernel Learning Approach for Video Emotion Recognition

  • Wei Li
  • Farnaz Abtahi
  • Zhigang Zhu

Transductive Transfer LDA with Riesz-based Volume LBP for Emotion Recognition in The Wild

  • Yuan Zong
  • Wenming Zheng
  • Xiaohua Huang
  • Jingwei Yan
  • Tong Zhang

Combining Multimodal Features within a Fusion Network for Emotion Recognition in the Wild

  • Bo Sun
  • Liandong Li
  • Guoyan Zhou
  • Xuewen Wu
  • Jun He
  • Lejun Yu
  • Dongxue Li
  • Qinglan Wei

Emotion Recognition in the Wild via Convolutional Neural Networks and Mapped Binary Patterns

  • Gil Levi
  • Tal Hassner

Quantification of Cinematography Semiotics for Video-based Facial Emotion Recognition in the EmotiW 2015 Grand Challenge

  • Albert C. Cruz

Affect Recognition using Key Frame Selection based on Minimum Sparse Reconstruction

  • Mehmet Kayaoglu
  • Cigdem Eroglu Erdem

SESSION: Grand Challenge 3: Multimodal Learning and Analytics Grand Challenge 2015

2015 Multimodal Learning and Analytics Grand Challenge

  • Marcelo Worsley
  • Katherine Chiluiza
  • Joseph F. Grafsgaard
  • Xavier Ochoa

Providing Real-time Feedback for Student Teachers in a Virtual Rehearsal Environment

  • Roghayeh Barmaki
  • Charles E. Hughes

Presentation Trainer, your Public Speaking Multimodal Coach

  • Jan Schneider
  • Dirk Börner
  • Peter van Rosmalen
  • Marcus Specht

Utilizing Depth Sensors for Analyzing Multimodal Presentations: Hardware, Software and Toolkits

  • Chee Wee Leong
  • Lei Chen
  • Gary Feng
  • Chong Min Lee
  • Matthew Mulholland

Multimodal Capture of Teacher-Student Interactions for Automated Dialogic Analysis in Live Classrooms

  • Sidney K. D'Mello
  • Andrew M. Olney
  • Nathan Blanchard
  • Borhan Samei
  • Xiaoyi Sun
  • Brooke Ward
  • Sean Kelly

Multimodal Selfies: Designing a Multimodal Recording Device for Students in Traditional Classrooms

  • Federico Domínguez
  • Katherine Chiluiza
  • Vanessa Echeverria
  • Xavier Ochoa

SESSION: Doctoral Consortium

Session Chair: Carlos Busso

Temporal Association Rules for Modelling Multimodal Social Signals

  • Thomas Janssoone

Detecting and Synthesizing Synchronous Joint Action in Human-Robot Teams

  • Tariq Iqbal
  • Laurel D. Riek

Micro-opinion Sentiment Intensity Analysis and Summarization in Online Videos

  • Amir Zadeh

Attention and Engagement Aware Multimodal Conversational Systems

  • Zhou Yu

Implicit Human-computer Interaction: Two Complementary Approaches

  • Julia Wache

Instantaneous and Robust Eye-Activity Based Task Analysis

  • Hoe Kin Wong

Challenges in Deep Learning for Multimodal Applications

  • Sayan Ghosh

Exploring Intent-driven Multimodal Interface for Geographical Information System

  • Feng Sun

Software Techniques for Multimodal Input Processing in Realtime Interactive Systems

  • Martin Fischbach

Gait and Postural Sway Analysis, A Multi-Modal System

  • Hafsa Ismail

A Computational Model of Culture-Specific Emotion Detection for Artificial Agents in the Learning Domain

  • Ganapreeta R. Naidu

Record, Transform & Reproduce Social Encounters in Immersive VR: An Iterative Approach

  • Jan Kolkmeier

Multimodal Affect Detection in the Wild: Accuracy, Availability, and Generalizability

  • Nigel Bosch

Multimodal Assessment of Teaching Behavior in Immersive Rehearsal Environment - TeachLivE

  • Roghayeh Barmaki

ICMI 2015: ACM International Conference on Multimodal Interaction, 9-13 November 2015, Seattle, USA.