ICMI 2016: Proceedings of the 18th ACM International Conference on Multimodal Interaction

SESSION: Invited Talks

Understanding people by tracking their word use (keynote)

  • James W. Pennebaker

Learning to generate images and their descriptions (keynote)

  • Richard Zemel

Embodied media: expanding human capacity via virtual reality and telexistence (keynote)

  • Susumu Tachi

Help me if you can: towards multiadaptive interaction platforms (ICMI awardee talk)

  • Wolfgang Wahlster

SESSION: Oral Session 1: Multimodal Social Agents

Trust me: multimodal signals of trustworthiness

  • Gale Lucas
  • Giota Stratou
  • Shari Lieblich
  • Jonathan Gratch

Semi-situated learning of verbal and nonverbal content for repeated human-robot interaction

  • Iolanda Leite
  • André Pereira
  • Allison Funkhouser
  • Boyang Li
  • Jill Fain Lehman

Towards building an attentive artificial listener: on the perception of attentiveness in audio-visual feedback tokens

  • Catharine Oertel
  • José Lopes
  • Yu Yu
  • Kenneth A. Funes Mora
  • Joakim Gustafson
  • Alan W. Black
  • Jean-Marc Odobez

Sequence-based multimodal behavior modeling for social agents

  • Soumia Dermouche
  • Catherine Pelachaud

SESSION: Oral Session 2: Physiological and Tactile Modalities

Adaptive review for mobile MOOC learning via implicit physiological signal sensing

  • Phuong Pham
  • Jingtao Wang

Visuotactile integration for depth perception in augmented reality

  • Nina Rosa
  • Wolfgang Hürst
  • Peter Werkhoven
  • Remco Veltkamp

Exploring multimodal biosignal features for stress detection during indoor mobility

  • Kyriaki Kalimeri
  • Charalampos Saitis

An IDE for multimodal controls in smart buildings

  • Sebastian Peters
  • Jan Ole Johanssen
  • Bernd Bruegge

SESSION: Poster Session 1

Personalized unknown word detection in non-native language reading using eye gaze

  • Rui Hiraoka
  • Hiroki Tanaka
  • Sakriani Sakti
  • Graham Neubig
  • Satoshi Nakamura

Discovering facial expressions for states of amused, persuaded, informed, sentimental and inspired

  • Daniel McDuff

Do speech features for detecting cognitive load depend on specific languages?

  • Rui Chen
  • Tiantian Xie
  • Yingtao Xie
  • Tao Lin
  • Ningjiu Tang

Training on the job: behavioral analysis of job interviews in hospitality

  • Skanda Muralidhar
  • Laurent Son Nguyen
  • Denise Frauendorfer
  • Jean-Marc Odobez
  • Marianne Schmid Mast
  • Daniel Gatica-Perez

Emotion spotting: discovering regions of evidence in audio-visual emotion expressions

  • Yelin Kim
  • Emily Mower Provost

Semi-supervised model personalization for improved detection of learner's emotional engagement

  • Nese Alyuz
  • Eda Okur
  • Ece Oktay
  • Utku Genc
  • Sinem Aslan
  • Sinem Emine Mete
  • Bert Arnrich
  • Asli Arslan Esme

Driving maneuver prediction using car sensor and driver physiological signals

  • Nanxiang Li
  • Teruhisa Misu
  • Ashish Tawari
  • Alexandre Miranda
  • Chihiro Suga
  • Kikuo Fujimura

On leveraging crowdsourced data for automatic perceived stress detection

  • Jonathan Aigrain
  • Arnaud Dapogny
  • Kévin Bailly
  • Séverine Dubuisson
  • Marcin Detyniecki
  • Mohamed Chetouani

Investigating the impact of automated transcripts on non-native speakers' listening comprehension

  • Xun Cao
  • Naomi Yamashita
  • Toru Ishida

Speaker impact on audience comprehension for academic presentations

  • Keith Curtis
  • Gareth J. F. Jones
  • Nick Campbell

EmoReact: a multimodal approach and dataset for recognizing emotional responses in children

  • Behnaz Nojavanasghari
  • Tadas Baltrušaitis
  • Charles E. Hughes
  • Louis-Philippe Morency

Bimanual input for multiscale navigation with pressure and touch gestures

  • Sebastien Pelurson
  • Laurence Nigay

Intervention-free selection using EEG and eye tracking

  • Felix Putze
  • Johannes Popp
  • Jutta Hild
  • Jürgen Beyerer
  • Tanja Schultz

Automated scoring of interview videos using Doc2Vec multimodal feature extraction paradigm

  • Lei Chen
  • Gary Feng
  • Chee Wee Leong
  • Blair Lehman
  • Michelle Martin-Raugh
  • Harrison Kell
  • Chong Min Lee
  • Su-Youn Yoon

Estimating communication skills using dialogue acts and nonverbal features in multiple discussion datasets

  • Shogo Okada
  • Yoshihiko Ohtake
  • Yukiko I. Nakano
  • Yuki Hayashi
  • Hung-Hsuan Huang
  • Yutaka Takase
  • Katsumi Nitta

Multi-sensor modeling of teacher instructional segments in live classrooms

  • Patrick J. Donnelly
  • Nathaniel Blanchard
  • Borhan Samei
  • Andrew M. Olney
  • Xiaoyi Sun
  • Brooke Ward
  • Sean Kelly
  • Martin Nystrand
  • Sidney K. D'Mello

SESSION: Oral Session 3: Groups, Teams, and Meetings

Meeting extracts for discussion summarization based on multimodal nonverbal information

  • Fumio Nihei
  • Yukiko I. Nakano
  • Yutaka Takase

Getting to know you: a multimodal investigation of team behavior and resilience to stress

  • Catherine Neubauer
  • Joshua Woolley
  • Peter Khooshabeh
  • Stefan Scherer

Measuring the impact of multimodal behavioural feedback loops on social interactions

  • Ionut Damian
  • Tobias Baur
  • Elisabeth André

Analyzing mouth-opening transition pattern for predicting next speaker in multi-party meetings

  • Ryo Ishii
  • Shiro Kumano
  • Kazuhiro Otsuka

SESSION: Oral Session 4: Personality and Emotion

Automatic recognition of self-reported and perceived emotion: does joint modeling help?

  • Biqiao Zhang
  • Georg Essl
  • Emily Mower Provost

Personality classification and behaviour interpretation: an approach based on feature categories

  • Sheng Fang
  • Catherine Achard
  • Séverine Dubuisson

Multiscale kernel locally penalised discriminant analysis exemplified by emotion recognition in speech

  • Xinzhou Xu
  • Jun Deng
  • Maryna Gavryukova
  • Zixing Zhang
  • Li Zhao
  • Björn Schuller

Estimating self-assessed personality from body movements and proximity in crowded mingling scenarios

  • Laura Cabrera-Quiros
  • Ekin Gedik
  • Hayley Hung

SESSION: Poster Session 2

Deep learning driven hypergraph representation for image-based emotion recognition

  • Yuchi Huang
  • Hanqing Lu

Towards a listening agent: a system generating audiovisual laughs and smiles to show interest

  • Kevin El Haddad
  • Hüseyin Çakmak
  • Emer Gilmartin
  • Stéphane Dupont
  • Thierry Dutoit

Sound emblems for affective multimodal output of a robotic tutor: a perception study

  • Helen Hastie
  • Pasquale Dente
  • Dennis Küster
  • Arvid Kappas

Automatic detection of very early stage of dementia through multimodal interaction with computer avatars

  • Hiroki Tanaka
  • Hiroyoshi Adachi
  • Norimichi Ukita
  • Takashi Kudo
  • Satoshi Nakamura

MobileSSI: asynchronous fusion for social signal interpretation in the wild

  • Simon Flutura
  • Johannes Wagner
  • Florian Lingenfelser
  • Andreas Seiderer
  • Elisabeth André

Language proficiency assessment of English L2 speakers based on joint analysis of prosody and native language

  • Yue Zhang
  • Felix Weninger
  • Anton Batliner
  • Florian Hönig
  • Björn Schuller

Training deep networks for facial expression recognition with crowd-sourced label distribution

  • Emad Barsoum
  • Cha Zhang
  • Cristian Canton Ferrer
  • Zhengyou Zhang

Deep multimodal fusion for persuasiveness prediction

  • Behnaz Nojavanasghari
  • Deepak Gopinath
  • Jayanth Koushik
  • Tadas Baltrušaitis
  • Louis-Philippe Morency

Comparison of three implementations of HeadTurn: a multimodal interaction technique with gaze and head turns

  • Oleg Špakov
  • Poika Isokoski
  • Jari Kangas
  • Jussi Rantala
  • Deepak Akkil
  • Roope Raisamo

Effects of multimodal cues on children's perception of uncanniness in a social robot

  • Maike Paetzel
  • Christopher Peters
  • Ingela Nyström
  • Ginevra Castellano

Multimodal feedback for finger-based interaction in mobile augmented reality

  • Wolfgang Hürst
  • Kevin Vriens

Smooth eye movement interaction using EOG glasses

  • Murtaza Dhuliawala
  • Juyoung Lee
  • Junichi Shimizu
  • Andreas Bulling
  • Kai Kunze
  • Thad Starner
  • Woontack Woo

Active speaker detection with audio-visual co-training

  • Punarjay Chakravarty
  • Jeroen Zegers
  • Tinne Tuytelaars
  • Hugo Van hamme

Detecting emergent leader in a meeting environment using nonverbal visual features only

  • Cigdem Beyan
  • Nicolò Carissimi
  • Francesca Capozzi
  • Sebastiano Vascon
  • Matteo Bustreo
  • Antonio Pierro
  • Cristina Becchio
  • Vittorio Murino

Stressful first impressions in job interviews

  • Ailbhe N. Finnerty
  • Skanda Muralidhar
  • Laurent Son Nguyen
  • Fabio Pianesi
  • Daniel Gatica-Perez

SESSION: Oral Session 5: Gesture, Touch, and Haptics

Analyzing the articulation features of children's touchscreen gestures

  • Alex Shaw
  • Lisa Anthony

Reach out and touch me: effects of four distinct haptic technologies on affective touch in virtual reality

  • Imtiaj Ahmed
  • Ville Harjunen
  • Giulio Jacucci
  • Eve Hoggan
  • Niklas Ravaja
  • Michiel M. Spapé

Using touchscreen interaction data to predict cognitive workload

  • Philipp Mock
  • Peter Gerjets
  • Maike Tibus
  • Ulrich Trautwein
  • Korbinian Möller
  • Wolfgang Rosenstiel

Exploration of virtual environments on tablet: comparison between tactile and tangible interaction techniques

  • Adrien Arnaud
  • Jean-Baptiste Corrégé
  • Céline Clavel
  • Michèle Gouiffès
  • Mehdi Ammi

SESSION: Oral Session 6: Skill Training and Assessment

Understanding the impact of personal feedback on face-to-face interactions in the workplace

  • Afra Mashhadi
  • Akhil Mathur
  • Marc Van den Broeck
  • Geert Vanderhulst
  • Fahim Kawsar

Asynchronous video interviews vs. face-to-face interviews for communication skill measurement: a systematic study

  • Sowmya Rasipuram
  • Pooja Rao S. B.
  • Dinesh Babu Jayagopi

Context and cognitive state triggered interventions for mobile MOOC learning

  • Xiang Xiao
  • Jingtao Wang

Native vs. non-native language fluency implications on multimodal interaction for interpersonal skills training

  • Mathieu Chollet
  • Helmut Prendinger
  • Stefan Scherer

SESSION: Demo Session 1

Social signal processing for dummies

  • Ionut Damian
  • Michael Dietz
  • Frank Gaibler
  • Elisabeth André

Metering "black holes": networking stand-alone applications for distributed multimodal synchronization

  • Michael Cohen
  • Yousuke Nagayama
  • Bektur Ryskeldiev

Towards a multimodal adaptive lighting system for visually impaired children

  • Euan Freeman
  • Graham Wilson
  • Stephen Brewster

Multimodal affective feedback: combining thermal, vibrotactile, audio and visual signals

  • Graham Wilson
  • Euan Freeman
  • Stephen Brewster

Niki and Julie: a robot and virtual human for studying multimodal social interaction

  • Ron Artstein
  • David Traum
  • Jill Boberg
  • Alesia Gainer
  • Jonathan Gratch
  • Emmanuel Johnson
  • Anton Leuski
  • Mikio Nakano

A demonstration of multimodal debrief generation for AUVs, post-mission and in-mission

  • Helen Hastie
  • Xingkun Liu
  • Pedro Patron

Laughter detection in the wild: demonstrating a tool for mobile social signal processing and visualization

  • Simon Flutura
  • Johannes Wagner
  • Florian Lingenfelser
  • Andreas Seiderer
  • Elisabeth André

SESSION: Demo Session 2

Multimodal system for public speaking with real time feedback: a positive computing perspective

  • Fiona Dermody
  • Alistair Sutherland

Multimodal biofeedback system integrating low-cost easy sensing devices

  • Wataru Hashiguchi
  • Junya Morita
  • Takatsugu Hirayama
  • Kenji Mase
  • Kazunori Yamada
  • Mayu Yokoya

A telepresence system using a flexible textile display

  • Kana Kushida
  • Hideyuki Nakanishi

Large-scale multimodal movie dialogue corpus

  • Ryu Yasuhara
  • Masashi Inoue
  • Ikuya Suga
  • Tetsuo Kosaka

Immersive virtual reality with multimodal interaction and streaming technology

  • Wan-Lun Tsai
  • You-Lun Hsu
  • Chi-Po Lin
  • Chen-Yu Zhu
  • Yu-Cheng Chen
  • Min-Chun Hu

Multimodal interaction with the autonomous Android ERICA

  • Divesh Lala
  • Pierrick Milhorat
  • Koji Inoue
  • Tianyu Zhao
  • Tatsuya Kawahara

Ask Alice: an artificial retrieval of information agent

  • Michel Valstar
  • Tobias Baur
  • Angelo Cafaro
  • Alexandru Ghitulescu
  • Blaise Potard
  • Johannes Wagner
  • Elisabeth André
  • Laurent Durieu
  • Matthew Aylett
  • Soumia Dermouche
  • Catherine Pelachaud
  • Eduardo Coutinho
  • Björn Schuller
  • Yue Zhang
  • Dirk Heylen
  • Mariët Theune
  • Jelte van Waterschoot

Design of multimodal instructional tutoring agents using augmented reality and smart learning objects

  • Anmol Srivastava
  • Pradeep Yammiyavar

AttentiveVideo: quantifying emotional responses to mobile video advertisements

  • Phuong Pham
  • Jingtao Wang

Young Merlin: an embodied conversational agent in virtual reality

  • Ivan Gris
  • Diego A. Rivera
  • Alex Rayon
  • Adriana Camacho
  • David Novick

SESSION: EmotiW Challenge

EmotiW 2016: video and group-level emotion recognition challenges

  • Abhinav Dhall
  • Roland Goecke
  • Jyoti Joshi
  • Jesse Hoey
  • Tom Gedeon

Emotion recognition in the wild from videos using images

  • Sarah Adel Bargal
  • Emad Barsoum
  • Cristian Canton Ferrer
  • Cha Zhang

A deep look into group happiness prediction from images

  • Aleksandra Cerekovic

Video-based emotion recognition using CNN-RNN and C3D hybrid networks

  • Yin Fan
  • Xiangju Lu
  • Dian Li
  • Yuanliu Liu

LSTM for dynamic emotion and group emotion recognition in the wild

  • Bo Sun
  • Qinglan Wei
  • Liandong Li
  • Qihua Xu
  • Jun He
  • Lejun Yu

Multi-clue fusion for emotion recognition in the wild

  • Jingwei Yan
  • Wenming Zheng
  • Zhen Cui
  • Chuangao Tang
  • Tong Zhang
  • Yuan Zong
  • Ning Sun

Multi-view common space learning for emotion recognition in the wild

  • Jianlong Wu
  • Zhouchen Lin
  • Hongbin Zha

HoloNet: towards robust emotion recognition in the wild

  • Anbang Yao
  • Dongqi Cai
  • Ping Hu
  • Shandong Wang
  • Liang Sha
  • Yurong Chen

Group happiness assessment using geometric features and dataset balancing

  • Vassilios Vonikakis
  • Yasin Yazici
  • Viet Dung Nguyen
  • Stefan Winkler

Happiness level prediction with sequential inputs via multiple regressions

  • Jianshu Li
  • Sujoy Roy
  • Jiashi Feng
  • Terence Sim

Video emotion recognition in the wild based on fusion of multimodal features

  • Shizhe Chen
  • Xinrui Li
  • Qin Jin
  • Shilei Zhang
  • Yong Qin

Wild wild emotion: a multimodal ensemble approach

  • John Gideon
  • Biqiao Zhang
  • Zakaria Aldeneh
  • Yelin Kim
  • Soheil Khorram
  • Duc Le
  • Emily Mower Provost

Audio and face video emotion recognition in the wild using deep neural networks and small datasets

  • Wan Ding
  • Mingyu Xu
  • Dongyan Huang
  • Weisi Lin
  • Minghui Dong
  • Xinguo Yu
  • Haizhou Li

Automatic emotion recognition in the wild using an ensemble of static and dynamic representations

  • Mostafa Mehdipour Ghazi
  • Hazım Kemal Ekenel

SESSION: Doctoral Consortium

The influence of appearance and interaction strategy of a social robot on the feeling of uncanniness in humans

  • Maike Paetzel

Viewing support system for multi-view videos

  • Xueting Wang

Engaging children with autism in a shape perception task using a haptic force feedback interface

  • Alix Pérusseau-Lambert

Modeling user's decision process through gaze behavior

  • Kei Shimonishi

Multimodal positive computing system for public speaking with real-time feedback

  • Fiona Dermody

Prediction/assessment of communication skill using multimodal cues in social interactions

  • Sowmya Rasipuram

Player/avatar body relations in multimodal augmented reality games

  • Nina Rosa

Computational model for interpersonal attitude expression

  • Soumia Dermouche

Assessing symptoms of excessive SNS usage based on user behavior and emotion

  • Ploypailin Intapong
  • Tipporn Laohakangvalvit
  • Tiranee Achalakul
  • Michiko Ohkura

Kawaii feeling estimation by product attributes and biological signals

  • Tipporn Laohakangvalvit
  • Tiranee Achalakul
  • Michiko Ohkura

Multimodal sensing of affect intensity

  • Shalini Bhatia

Enriching student learning experience using augmented reality and smart learning objects

  • Anmol Srivastava

Automated recognition of facial expressions authenticity

  • Krystian Radlak
  • Bogdan Smolka

Improving the generalizability of emotion recognition systems: towards emotion recognition in the wild

  • Biqiao Zhang

SESSION: Grand Challenge Summary

Emotion recognition in the wild challenge 2016

  • Abhinav Dhall
  • Roland Goecke
  • Jyoti Joshi
  • Tom Gedeon

SESSION: Workshop Summaries

1st international workshop on embodied interaction with smart environments (workshop summary)

  • Patrick Holthaus
  • Thomas Hermann
  • Sebastian Wrede
  • Sven Wachsmuth
  • Britta Wrede

ASSP4MI2016: 2nd international workshop on advancements in social signal processing for multimodal interaction (workshop summary)

  • Khiet P. Truong
  • Dirk Heylen
  • Toyoaki Nishida
  • Mohamed Chetouani

ERM4CT 2016: 2nd international workshop on emotion representations and modelling for companion systems (workshop summary)

  • Kim Hartmann
  • Ingo Siegert
  • Ali Albert Salah
  • Khiet P. Truong

International workshop on multimodal virtual and augmented reality (workshop summary)

  • Wolfgang Hürst
  • Daisuke Iwai
  • Prabhakaran Balakrishnan

International workshop on social learning and multimodal interaction for designing artificial agents (workshop summary)

  • Mohamed Chetouani
  • Salvatore M. Anzalone
  • Giovanna Varni
  • Isabelle Hupont Torres
  • Ginevra Castellano
  • Angelica Lim
  • Gentiane Venture

1st international workshop on multi-sensorial approaches to human-food interaction (workshop summary)

  • Anton Nijholt
  • Carlos Velasco
  • Kasun Karunanayaka
  • Gijs Huisman

International workshop on multimodal analyses enabling artificial agents in human-machine interaction (workshop summary)

  • Ronald Böck
  • Francesca Bonin
  • Nick Campbell
  • Ronald Poppe
