ICMI '14: Proceedings of the 16th International Conference on Multimodal Interaction

MLA '14: Proceedings of the 2014 ACM workshop on Multimodal Learning Analytics Workshop and Grand Challenge

MAPTRAITS '14: Proceedings of the 2014 Workshop on Mapping Personality Traits Challenge and Workshop

MMRWHRI '14: Proceedings of the 2014 Workshop on Multimodal, Multi-Party, Real-World Human-Robot Interaction

UM3I '14: Proceedings of the 2014 workshop on Understanding and Modeling Multiparty, Multimodal Interactions

RFMIR '14: Proceedings of the 2014 Workshop on Roadmapping the Future of Multimodal Interaction Research including Business Opportunities and Challenges

ERM4HCI '14: Proceedings of the 2014 workshop on Emotion Representation and Modelling in Human-Computer-Interaction-Systems

GazeIn '14: Proceedings of the 7th Workshop on Eye Gaze in Intelligent Human Machine Interaction: Eye-Gaze & Multimodality

ICMI '14: Proceedings of the 16th International Conference on Multimodal Interaction

Full Citation in the ACM Digital Library

SESSION: Keynote Address

  • Albert Ali Salah

Bursting our Digital Bubbles: Life Beyond the App

  • Yvonne Rogers

SESSION: Oral Session 1: Dialogue and Social Interaction

  • Toyoaki Nishida

Managing Human-Robot Engagement with Forecasts and... um... Hesitations

  • Dan Bohus
  • Eric Horvitz

Written Activity, Representations and Fluency as Predictors of Domain Expertise in Mathematics

  • Sharon Oviatt
  • Adrienne Cohen

Analysis of Respiration for Prediction of "Who Will Be Next Speaker and When?" in Multi-Party Meetings

  • Ryo Ishii
  • Kazuhiro Otsuka
  • Shiro Kumano
  • Junji Yamato

A Multimodal In-Car Dialogue System That Tracks The Driver's Attention

  • Spyros Kousidis
  • Casey Kennington
  • Timo Baumann
  • Hendrik Buschmeier
  • Stefan Kopp
  • David Schlangen

SESSION: Oral Session 2: Multimodal Fusion

  • Björn Schuller

Deep Multimodal Fusion: Combining Discrete Events and Continuous Signals

  • Héctor P. Martínez
  • Georgios N. Yannakakis

The Additive Value of Multimodal Features for Predicting Engagement, Frustration, and Learning during Tutoring

  • Joseph F. Grafsgaard
  • Joseph B. Wiggins
  • Alexandria Katarina Vail
  • Kristy Elizabeth Boyer
  • Eric N. Wiebe
  • James C. Lester

Computational Analysis of Persuasiveness in Social Multimedia: A Novel Dataset and Multimodal Prediction Approach

  • Sunghyun Park
  • Han Suk Shim
  • Moitreya Chatterjee
  • Kenji Sagae
  • Louis-Philippe Morency

Deception detection using a multimodal approach

  • Mohamed Abouelenien
  • Veronica Pérez-Rosas
  • Rada Mihalcea
  • Mihai Burzo

DEMONSTRATION SESSION: Demo Session 1

  • Kazuhiro Otsuka
  • Lale Akarun

Multimodal Interaction for Future Control Centers: An Interactive Demonstrator

  • Ferdinand Fuhrmann
  • Rene Kaiser

Emotional Charades

  • Stefano Piana
  • Alessandra Staglianò
  • Francesca Odone
  • Antonio Camurri

Glass Shooter: Exploring First-Person Shooter Game Control with Google Glass

  • Chun-Yen Hsu
  • Ying-Chao Tung
  • Han-Yu Wang
  • Silvia Chyou
  • Jer-Wei Lin
  • Mike Y. Chen

Orchestration for Group Videoconferencing: An Interactive Demonstrator

  • Wolfgang Weiss
  • Rene Kaiser
  • Manolis Falelakis

Integrating Remote PPG in Facial Expression Analysis Framework

  • H. Emrah Tasli
  • Amogh Gudi
  • Marten Den Uyl

Context-Aware Multimodal Robotic Health Assistant

  • Vidyavisal Mangipudi
  • Raj Tumuluri

WebSanyog: A Portable Assistive Web Browser for People with Cerebral Palsy

  • Tirthankar Dasgupta
  • Manjira Sinha
  • Gagan Kandra
  • Anupam Basu

The hybrid Agent MARCO

  • Nicolas Riesterer
  • Christian Becker-Asano
  • Julien Hué
  • Christian Dornhege
  • Bernhard Nebel

Towards Supporting Non-linear Navigation in Educational Videos

  • Kuldeep Yadav
  • Kundan Shrivastava
  • Om Deshmukh

POSTER SESSION: Poster Session 1

  • Oya Aran
  • Louis-Philippe Morency

Detecting conversing groups with a single worn accelerometer

  • Hayley Hung
  • Gwenn Englebienne
  • Laura Cabrera Quiros

Identification of the Driver's Interest Point using a Head Pose Trajectory for Situated Dialog Systems

  • Young-Ho Kim
  • Teruhisa Misu

An Explorative Study on Crossmodal Congruence Between Visual and Tactile Icons Based on Emotional Responses

  • Taekbeom Yoo
  • Yongjae Yoo
  • Seungmoon Choi

Why We Watch the News: A Dataset for Exploring Sentiment in Broadcast Video News

  • Joseph G. Ellis
  • Brendan Jou
  • Shih-Fu Chang

Dyadic Behavior Analysis in Depression Severity Assessment Interviews

  • Stefan Scherer
  • Zakia Hammal
  • Ying Yang
  • Louis-Philippe Morency
  • Jeffrey F. Cohn

Touching the Void -- Introducing CoST: Corpus of Social Touch

  • Merel M. Jung
  • Ronald Poppe
  • Mannes Poel
  • Dirk K.J. Heylen

Unsupervised Domain Adaptation for Personalized Facial Emotion Recognition

  • Gloria Zen
  • Enver Sangineto
  • Elisa Ricci
  • Nicu Sebe

Predicting Influential Statements in Group Discussions using Speech and Head Motion Information

  • Fumio Nihei
  • Yukiko I. Nakano
  • Yuki Hayashi
  • Hung-Hsuan Huang
  • Shogo Okada

The Relation of Eye Gaze and Face Pose: Potential Impact on Speech Recognition

  • Malcolm Slaney
  • Andreas Stolcke
  • Dilek Hakkani-Tür

Speech-Driven Animation Constrained by Appropriate Discourse Functions

  • Najmeh Sadoughi
  • Yang Liu
  • Carlos Busso

Many Fingers Make Light Work: Non-Visual Capacitive Surface Exploration

  • Martin Halvey
  • Andy Crossan

Multimodal Interaction History and its use in Error Detection and Recovery

  • Felix Schüssel
  • Frank Honold
  • Miriam Schmidt
  • Nikola Bubalo
  • Anke Huckauf
  • Michael Weber

Gesture Heatmaps: Understanding Gesture Performance with Colorful Visualizations

  • Radu-Daniel Vatavu
  • Lisa Anthony
  • Jacob O. Wobbrock

Personal Aesthetics for Soft Biometrics: A Generative Multi-resolution Approach

  • Cristina Segalin
  • Alessandro Perina
  • Marco Cristani

Synchronising Physiological and Behavioural Sensors in a Driving Simulator

  • Ronnie Taib
  • Benjamin Itzstein
  • Kun Yu

Data-Driven Model of Nonverbal Behavior for Socially Assistive Human-Robot Interactions

  • Henny Admoni
  • Brian Scassellati

Towards Automated Assessment of Public Speaking Skills Using Multimodal Cues

  • Lei Chen
  • Gary Feng
  • Jilliam Joe
  • Chee Wee Leong
  • Christopher Kitchen
  • Chong Min Lee

Increasing Customers' Attention using Implicit and Explicit Interaction in Urban Advertisement

  • Matthias Wölfel
  • Luigi Bucchino

System for Presenting and Creating Smell Effects to Video

  • Risa Suzuki
  • Shutaro Homma
  • Eri Matsuura
  • Ken-ichi Okada

CrossMotion: Fusing Device and Image Motion for User Identification, Tracking and Device Association

  • Andrew D. Wilson
  • Hrvoje Benko

Statistical Analysis of Personality and Identity in Chats Using a Keylogging Platform

  • Giorgio Roffo
  • Cinzia Giorgetta
  • Roberta Ferrario
  • Walter Riviera
  • Marco Cristani

Understanding Users' Perceived Difficulty of Multi-Touch Gesture Articulation

  • Yosra Rekik
  • Radu-Daniel Vatavu
  • Laurent Grisoni

A Multimodal Context-based Approach for Distress Assessment

  • Sayan Ghosh
  • Moitreya Chatterjee
  • Louis-Philippe Morency

Exploring a Model of Gaze for Grounding in Multimodal HRI

  • Gregor Mehlmann
  • Markus Häring
  • Kathrin Janowski
  • Tobias Baur
  • Patrick Gebhard
  • Elisabeth André

Predicting Learning and Engagement in Tutorial Dialogue: A Personality-Based Model

  • Alexandria Katarina Vail
  • Joseph F. Grafsgaard
  • Joseph B. Wiggins
  • James C. Lester
  • Kristy Elizabeth Boyer

Eye Gaze for Spoken Language Understanding in Multi-modal Conversational Interactions

  • Dilek Hakkani-Tür
  • Malcolm Slaney
  • Asli Celikyilmaz
  • Larry Heck

SoundFLEX: Designing Audio to Guide Interactions with Shape-Retaining Deformable Interfaces

  • Koray Tahiroğlu
  • Thomas Svedström
  • Valtteri Wikström
  • Simon Overstall
  • Johan Kildal
  • Teemu Ahmaniemi

Investigating Intrusiveness of Workload Adaptation

  • Felix Putze
  • Tanja Schultz

SESSION: Keynote Address 2

  • Louis-Philippe Morency

Smart Multimodal Interaction through Big Data

  • Cafer Tosun

SESSION: Oral Session 3: Affect and Cognitive Modeling

  • Sharon Oviatt

Natural Communication about Uncertainties in Situated Interaction

  • Tomislav Pejsa
  • Dan Bohus
  • Michael F. Cohen
  • Chit W. Saw
  • James Mahoney
  • Eric Horvitz

The SWELL Knowledge Work Dataset for Stress and User Modeling Research

  • Saskia Koldijk
  • Maya Sappelli
  • Suzan Verberne
  • Mark A. Neerincx
  • Wessel Kraaij

Rhythmic Body Movements of Laughter

  • Radoslaw Niewiadomski
  • Maurizio Mancini
  • Yu Ding
  • Catherine Pelachaud
  • Gualtiero Volpe

Automatic Blinking Detection towards Stress Discovery

  • Alvaro Marcos-Ramiro
  • Daniel Pizarro-Perez
  • Marta Marron-Romera
  • Daniel Gatica-Perez

SESSION: Oral Session 4: Nonverbal Behaviors

  • Bülent Sankur

Mid-air Authentication Gestures: An Exploration of Authentication Based on Palm and Finger Motions

  • Ilhan Aslan
  • Andreas Uhl
  • Alexander Meschtscherjakov
  • Manfred Tscheligi

Automatic Detection of Naturalistic Hand-over-Face Gesture Descriptors

  • Marwa M. Mahmoud
  • Tadas Baltrušaitis
  • Peter Robinson

Capturing Upper Body Motion in Conversation: An Appearance Quasi-Invariant Approach

  • Alvaro Marcos-Ramiro
  • Daniel Pizarro-Perez
  • Marta Marron-Romera
  • Daniel Gatica-Perez

User Independent Gaze Estimation by Exploiting Similarity Measures in the Eye Pair Appearance Eigenspace

  • Nanxiang Li
  • Carlos Busso

SESSION: Doctoral Spotlight Session

  • Marco Cristani

Exploring multimodality for translator-computer interaction

  • Julián Zapata

Towards Social Touch Intelligence: Developing a Robust System for Automatic Touch Recognition

  • Merel M. Jung

Facial Expression Analysis for Estimating Pain in Clinical Settings

  • Karan Sikka

Realizing Robust Human-Robot Interaction under Real Environments with Noises

  • Takaaki Sugiyama

Speaker- and Corpus-Independent Methods for Affect Classification in Computational Paralinguistics

  • Heysem Kaya

The Impact of Changing Communication Practices

  • Ailbhe N. Finnerty

Multi-Resident Human Behaviour Identification in Ambient Assisted Living Environments

  • Hande Alemdar

Gaze-Based Proactive User Interface for Pen-Based Systems

  • Çağla Çığ

Appearance based user-independent gaze estimation

  • Nanxiang Li

Affective Analysis of Abstract Paintings Using Statistical Analysis and Art Theory

  • Andreza Sartori

The Secret Language of Our Body: Affect and Personality Recognition Using Physiological Signals

  • Julia Wache

Perceptions of Interpersonal Behavior are Influenced by Gender, Facial Expression Intensity, and Head Pose

  • Jeffrey M. Girard

Authoring Communicative Behaviors for Situated, Embodied Characters

  • Tomislav Pejsa

Multimodal Analysis and Modeling of Nonverbal Behaviors during Tutoring

  • Joseph F. Grafsgaard

SESSION: Keynote Address 3

  • Oya Aran

Computation of Emotions

  • Peter Robinson

SESSION: Oral Session 5: Mobile and Urban Interaction

  • Metin Sezgin

Non-Visual Navigation Using Combined Audio Music and Haptic Cues

  • Emily Fujimoto
  • Matthew Turk

Tactile Feedback for Above-Device Gesture Interfaces: Adding Touch to Touchless Interactions

  • Euan Freeman
  • Stephen Brewster
  • Vuokko Lantz

Once Upon a Crime: Towards Crime Prediction from Demographics and Mobile Data

  • Andrey Bogomolov
  • Bruno Lepri
  • Jacopo Staiano
  • Nuria Oliver
  • Fabio Pianesi
  • Alex Pentland

Impact of Coordinate Systems on 3D Manipulations in Mobile Augmented Reality

  • Philipp Tiefenbacher
  • Steven Wichert
  • Daniel Merget
  • Gerhard Rigoll

SESSION: Oral Session 6: Healthcare and Assistive Technologies

  • Dan Bohus

Digital Reading Support for The Blind by Multimodal Interaction

  • Yasmine N. El-Glaly
  • Francis Quek

Measuring Child Visual Attention using Markerless Head Tracking from Color and Depth Sensing Cameras

  • Jonathan Bidwell
  • Irfan A. Essa
  • Agata Rozga
  • Gregory D. Abowd

Bi-Modal Detection of Painful Reaching for Chronic Pain Rehabilitation Systems

  • Temitayo A. Olugbade
  • M.S. Hane Aung
  • Nadia Bianchi-Berthouze
  • Nicolai Marquardt
  • Amanda C. Williams

SESSION: Keynote Address 4

  • Jeffrey Cohn

A World without Barriers: Connecting the World across Languages, Distances and Media

  • Alexander Waibel

SESSION: The Second Emotion Recognition In The Wild Challenge

Emotion Recognition In The Wild Challenge 2014: Baseline, Data and Protocol

  • Abhinav Dhall
  • Roland Goecke
  • Jyoti Joshi
  • Karan Sikka
  • Tom Gedeon

Neural Networks for Emotion Recognition in the Wild

  • Michał Grosicki

Emotion Recognition in the Wild: Incorporating Voice and Lip Activity in Multimodal Decision-Level Fusion

  • Fabien Ringeval
  • Shahin Amiriparian
  • Florian Eyben
  • Klaus Scherer
  • Björn Schuller

Combining Multimodal Features with Hierarchical Classifier Fusion for Emotion Recognition in the Wild

  • Bo Sun
  • Liandong Li
  • Tian Zuo
  • Ying Chen
  • Guoyan Zhou
  • Xuewen Wu

Combining Modality-Specific Extreme Learning Machines for Emotion Recognition in the Wild

  • Heysem Kaya
  • Albert Ali Salah

Combining Multiple Kernel Methods on Riemannian Manifold for Emotion Recognition in the Wild

  • Mengyi Liu
  • Ruiping Wang
  • Shaoxin Li
  • Shiguang Shan
  • Zhiwu Huang
  • Xilin Chen

Enhanced Autocorrelation in Real World Emotion Recognition

  • Sascha Meudt
  • Friedhelm Schwenker

Emotion Recognition in the Wild with Feature Fusion and Multiple Kernel Learning

  • JunKai Chen
  • Zenghai Chen
  • Zheru Chi
  • Hong Fu

Improved Spatiotemporal Local Monogenic Binary Pattern for Emotion Recognition in The Wild

  • Xiaohua Huang
  • Qiuhai He
  • Xiaopeng Hong
  • Guoying Zhao
  • Matti Pietikainen

Emotion Recognition in Real-world Conditions with Acoustic and Visual Features

  • Maxim Sidorov
  • Wolfgang Minker

SESSION: Workshop Overviews

ERM4HCI 2014: The 2nd Workshop on Emotion Representation and Modelling in Human-Computer-Interaction-Systems

  • Kim Hartmann
  • Björn Schuller
  • Ronald Böck

Gaze-In 2014: The 7th Workshop on Eye Gaze in Intelligent Human Machine Interaction

  • Hung-Hsuan Huang
  • Roman Bednarik
  • Kristiina Jokinen
  • Yukiko I. Nakano

MAPTRAITS 2014 - The First Audio/Visual Mapping Personality Traits Challenge - An Introduction: Perceived Personality and Social Dimensions

  • Oya Celiktutan
  • Florian Eyben
  • Evangelos Sariyanidi
  • Hatice Gunes
  • Björn Schuller

MLA'14: Third Multimodal Learning Analytics Workshop and Grand Challenges

  • Xavier Ochoa
  • Marcelo Worsley
  • Katherine Chiluiza
  • Saturnino Luz

ICMI 2014 Workshop on Multimodal, Multi-Party, Real-World Human-Robot Interaction

  • Mary Ellen Foster
  • Manuel Giuliani
  • Ronald Petrick

An Outline of Opportunities for Multimodal Research

  • Dirk Heylen
  • Alessandro Vinciarelli

UM3I 2014: International Workshop on Understanding and Modeling Multiparty, Multimodal Interactions

  • Samer Al Moubayed
  • Dan Bohus
  • Anna Esposito
  • Dirk Heylen
  • Maria Koutsombogera
  • Harris Papageorgiou
  • Gabriel Skantze

MLA '14: Proceedings of the 2014 ACM workshop on Multimodal Learning Analytics Workshop and Grand Challenge


WORKSHOP SESSION: Workshop Session

  • Xavier A. Ochoa

Multimodal Learning Analytics as a Tool for Bridging Learning Theory and Complex Learning Behaviors

  • Marcelo Worsley

Acoustic-Prosodic Entrainment and Rapport in Collaborative Learning Dialogues

  • Nichola Lubold
  • Heather Pon-Barry

Holistic Analysis of the Classroom

  • Mirko Raca
  • Pierre Dillenbourg

Deciphering the Practices and Affordances of Different Reasoning Strategies through Multimodal Learning Analytics

  • Marcelo Worsley
  • Paulo Blikstein

SESSION: Math Data Corpus Challenge

  • Katherine M. Chiluiza

Combining empirical and machine learning techniques to predict math expertise using pen signal features

  • Jianlong Zhou
  • Kevin Hang
  • Sharon Oviatt
  • Kun Yu
  • Fang Chen

SESSION: Oral Presentation Quality Challenge

  • Marcelo Worsley

Estimation of Presentations Skills Based on Slides and Audio Features

  • Gonzalo Luzardo
  • Bruno Guamán
  • Katherine Chiluiza
  • Jaime Castells
  • Xavier Ochoa

Using Multimodal Cues to Analyze MLA'14 Oral Presentation Quality Corpus: Presentation Delivery and Slides Quality

  • Lei Chen
  • Chee Wee Leong
  • Gary Feng
  • Chong Min Lee

Presentation Skills Estimation Based on Video and Kinect Data Analysis

  • Vanessa Echeverría
  • Allan Avendaño
  • Katherine Chiluiza
  • Aníbal Vásquez
  • Xavier Ochoa

MAPTRAITS '14: Proceedings of the 2014 Workshop on Mapping Personality Traits Challenge and Workshop


SESSION: Keynote Talk

Personality Computing: How Machines Can Deal With Personality Traits

  • Alessandro Vinciarelli

SESSION: Paper Presentations

MAPTRAITS 2014: The First Audio/Visual Mapping Personality Traits Challenge

  • Oya Celiktutan
  • Florian Eyben
  • Evangelos Sariyanidi
  • Hatice Gunes
  • Björn Schuller

Automatic Recognition of Personality Traits: A Multimodal Approach

  • Maxim Sidorov
  • Stefan Ultes
  • Alexander Schmitt

Continuous Mapping of Personality Traits: A Novel Challenge and Failure Conditions

  • Heysem Kaya
  • Albert Ali Salah

Acoustic Gait-based Person Identification using Hidden Markov Models

  • Jürgen T. Geiger
  • Maximilian Kneißl
  • Björn W. Schuller
  • Gerhard Rigoll

MMRWHRI '14: Proceedings of the 2014 Workshop on Multimodal, Multi-Party, Real-World Human-Robot Interaction


SESSION: Regular Papers

  • Mary Ellen Foster

Towards Closed Feedback Loops in HRI: Integrating InproTK and PaMini

  • Birte Carlmeyer
  • David Schlangen
  • Britta Wrede

Attention Detection in Elderly People-Robot Spoken Interaction

  • Mohamed El Amine Sehili
  • Fan Yang
  • Laurence Devillers

SESSION: Regular Paper

  • Manuel Giuliani

Advances in Wikipedia-based Interaction with Robots

  • Graham Wilcock
  • Kristiina Jokinen

SESSION: Late-Breaking Papers

  • Ronald Petrick

Self-calibration of an Assistive Device to Adapt to Different Users and Environments

  • Andrés Trujillo-León
  • Fernando Vidal-Verdú

Towards proactive robot behavior based on incremental language analysis

  • Suna Bensch
  • Thomas Hellström

Selection of an Object Requested by Speech Based on Generic Object Recognition

  • Hitoshi Nishimura
  • Yuko Ozasa
  • Yasuo Ariki
  • Mikio Nakano

Clarification Dialogues for Perception-based Errors in Situated Human-Computer Dialogues

  • Niels Schütte
  • John D. Kelleher
  • Brian Mac Namee

Applying Topic Recognition to Spoken Language in Human-Robot Interaction Dialogues

  • Manuel Giuliani
  • Thomas Marschall
  • Manfred Tscheligi

Applying Semantic Web Services to Multi-Robot Coordination

  • Yuhei Ogawa
  • Yuichiro Mori
  • Takahira Yamaguchi

Affective Feedback for a Virtual Robot in a Real-World Treasure Hunt

  • Mary Ellen Foster
  • Mei Yii Lim
  • Amol Deshmukh
  • Srini Janarthanam
  • Helen Hastie
  • Ruth Aylett

UM3I '14: Proceedings of the 2014 workshop on Understanding and Modeling Multiparty, Multimodal Interactions


SESSION: Keynote Address

  • Harris Papageorgiou

From Modeling Multimodal and Multiparty Interactions to Designing Conversational Agents

  • Yukiko I. Nakano

SESSION: Paper Session 1

  • Gabriel Skantze

Context in Affective Multiparty and Multimodal Interaction: Why, Which, How and Where?

  • Aggeliki Vlachostergiou
  • George Caridakis
  • Stefanos Kollias

SESSION: Paper Session 2

  • Dan Bohus

Effect of nonverbal behavioral patterns on the performance of small groups

  • Umut Avci
  • Oya Aran

Robo fashion world: a multimodal corpus of multi-child human-computer interaction

  • Jill Fain Lehman

Comparison of Human-Human and Human-Robot Turn-Taking Behaviour in Multiparty Situated Interaction

  • Martin Johansson
  • Gabriel Skantze
  • Joakim Gustafson

SESSION: Paper Session 3

  • Dirk Heylen

Who Will Get the Grant?: A Multimodal Corpus for the Analysis of Conversational Behaviours in Group Interviews

  • Catharine Oertel
  • Kenneth A. Funes Mora
  • Samira Sheikhi
  • Jean-Marc Odobez
  • Joakim Gustafson

Eye Gaze Analyses in L1 and L2 Conversations: From the Perspective of Listeners' Eye Gaze Activity

  • Koki Ijuin
  • Keiko Taguchi
  • Ichiro Umata
  • Seiichi Yamamoto

SESSION: Paper Session 4

  • Samer Al Moubayed

Mitigating problems in video-mediated group discussions: Towards conversation aware video-conferencing systems

  • Marwin Schmitt
  • Simon Gunkel
  • Pablo Cesar
  • Dick Bulterman

Models for Decision Making in Video Mediated Communication

  • Wolfgang Weiss
  • Manolis Falelakis
  • Rene Kaiser
  • Marian F. Ursu

RFMIR '14: Proceedings of the 2014 Workshop on Roadmapping the Future of Multimodal Interaction Research including Business Opportunities and Challenges


SESSION: Roadmap for Data Collection

Multimodal Analytics and its Data Ecosystem

  • Maria Koutsombogera
  • Harris Papageorgiou

Topics for the Future: Genre Differentiation, Annotation, and Linguistic Content Integration in Interaction Analysis

  • Francesca Bonin
  • Emer Gilmartin
  • Carl Vogel
  • Nick Campbell

SESSION: Improving Analytical Methods

Role of Inter-Personal Synchrony in Extracting Social Signatures: Some Case Studies

  • Mohamed Chetouani

Towards Multimodal Pain Assessment for Research and Clinical Use

  • Zakia Hammal
  • Jeffrey F. Cohn

Intra- and Interpersonal Functions of Head Motion in Emotion Communication

  • Zakia Hammal
  • Jeffrey F. Cohn

SESSION: New Methods for Modeling

Statistical Pattern Recognition Meets Formal Ontologies: Towards a Semantic Visual Understanding

  • Marco Cristani
  • Roberta Ferrario

Cognitive Multimodal Processing: from Signal to Behavior

  • Alexandros Potamianos

SESSION: Interaction Roadmap

Challenges for Social Embodiment

  • Elisabeth André

ROCKIT: Roadmap for Conversational Interaction Technologies

  • Steve Renals
  • Jean Carletta
  • Keith Edwards
  • Hervé Bourlard
  • Phil Garner
  • Andrei Popescu-Belis
  • Dietrich Klakow
  • Andrey Girenko
  • Volha Petukova
  • Philippe Wacker
  • Andrew Joscelyne
  • Costis Kompis
  • Simon Aliwell
  • William Stevens
  • Youssef Sabbah

Natural Multimodal Interaction with a Social Robot: What are the Premises?

  • Albert A. Salah

SESSION: Applications and Business Opportunities

Multimodal Interaction for Future Control Centers: Interaction Concept and Implementation

  • Rene Kaiser
  • Ferdinand Fuhrmann

Towards healthcare personal agents

  • Giuseppe Riccardi

Automatic Behaviour Understanding in Medicine

  • Michel Valstar

ERM4HCI '14: Proceedings of the 2014 workshop on Emotion Representation and Modelling in Human-Computer-Interaction-Systems


SESSION: Emotion Detection

  • Kim Hartmann

An Initial Analysis of Structured Video Interviews by Using Multimodal Emotion Detection

  • Lei Chen
  • Su-Youn Yoon
  • Chee Wee Leong
  • Michelle Martin
  • Min Ma

A Neural Network Based Approach to Social Touch Classification

  • Siewart van Wingerden
  • Tobias J. Uebbing
  • Merel M. Jung
  • Mannes Poel

Emotion Expression and Conversation Assessment in First Acquaintance Dialogues

  • Patrizia Paggio

SESSION: Human-Machine Interaction

  • Ronald Böck

A Design Platform for Emotion-Aware User Interfaces

  • Eunjung Lee
  • Gyu-Wan Kim
  • Byung-Soo Kim
  • Mi-Ae Kang

A Model to Incorporate Emotional Sensitivity into Human Computer Interactions

  • Sweety Ramnani
  • Ravi Prakash Gorthi

Detection of Emotional Events utilizing Support Vector Methods in an Active Learning HCI Scenario

  • Patrick Thiam
  • Sascha Meudt
  • Markus Kächele
  • Günther Palm
  • Friedhelm Schwenker

GazeIn '14: Proceedings of the 7th Workshop on Eye Gaze in Intelligent Human Machine Interaction: Eye-Gaze & Multimodality


SESSION: Keynote Talk

Attention and Gaze in Situated Language Interaction

  • Dan Bohus

SESSION: Long Papers

Spatio-Temporal Event Selection in Basic Surveillance Tasks using Eye Tracking and EEG

  • Jutta Hild
  • Felix Putze
  • David Kaufman
  • Christian Kühnle
  • Tanja Schultz
  • Jürgen Beyerer

Gaze-Based Virtual Task Predictor

  • Çağla Çığ
  • Tevfik Metin Sezgin

Analysis of Timing Structure of Eye Contact in Turn-changing

  • Ryo Ishii
  • Kazuhiro Otsuka
  • Shiro Kumano
  • Junji Yamato

Fusing Multimodal Human Expert Data to Uncover Hidden Semantics

  • Xuan Guo
  • Qi Yu
  • Rui Li
  • Cecilia Ovesdotter Alm
  • Anne R. Haake

Evaluating the Impact of Embodied Conversational Agents (ECAs) Attentional Behaviors on User Retention of Cultural Content in a Simulated Mobile Environment

  • Ioannis Doumanis
  • Serengul Smith

Analyzing Co-occurrence Patterns of Nonverbal Behaviors in Collaborative Learning

  • Sakiko Nihonyanagi
  • Yuki Hayashi
  • Yukiko I. Nakano

SESSION: Short Paper

Study on Participant-controlled Eye Tracker Calibration Procedure

  • Pawel Kasprowski
  • Katarzyna Harezlak

ICMI 2014, ACM International Conference on Multimodal Interaction. 12-16 November 2014, Istanbul, Turkey.