Challenges



Recognition of Social Touch Gestures Challenge 2015

Touch is an important non-verbal form of social interaction, used to communicate emotions and social messages. Automatic recognition of social touch is necessary to bring interaction in the tactile modality to areas such as Human-Robot Interaction (HRI). If a robot can understand social touch behavior, it can respond accordingly, resulting in richer and more natural interaction.

In this challenge the focus is on the recognition of different hand touch gestures. Two data sets will be made available, each containing labeled pressure/location data collected from similar matrix-type sensor grids under conditions reflecting different application orientations.

(1) CoST: Corpus of Social Touch (Jung et al., 2014) includes 14 touch gestures such as stroke, poke and hit, performed in three variations: normal, gentle and rough. Challenge subset: the gentle (2601 captures) and normal (2602 captures) variations of all 14 gestures.
(2) HAART: The Human-Animal Affective Robot Touch gesture set (Flagg and MacLean, 2013) includes 7 touch gestures which humans use to communicate emotion in animal interaction (Yohanan and MacLean, 2012), recorded under a range of covering, substrate and curvature conditions. Challenge subset: a single covering, substrate and curvature condition.

Participants can choose to work on one of the data sets or on both. The purpose of this challenge is to develop relevant features and classification methods for recognizing social touch gestures. Participants will share their innovative findings at ICMI 2015.
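To make the task concrete, the sketch below shows a minimal baseline, assuming each capture is an array of pressure frames from a matrix-type sensor grid and that labels are gesture classes. The frame shape, feature set and helper names are illustrative assumptions, not part of the challenge protocol.

# Minimal, illustrative baseline for touch gesture classification (not the official protocol).
# Assumption: each capture is a NumPy array of shape (frames, rows, cols) of pressure values.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def capture_features(capture):
    """Summarize one pressure capture with simple global statistics."""
    per_frame_sum = capture.reshape(capture.shape[0], -1).sum(axis=1)
    return np.array([
        capture.mean(),                                       # average pressure over the capture
        capture.max(),                                        # peak pressure
        capture.std(),                                        # pressure variability
        float(per_frame_sum.argmax()) / len(per_frame_sum),   # relative time of maximum contact
        float(len(per_frame_sum)),                            # duration in frames
    ])

def baseline_accuracy(captures, labels):
    """Cross-validated accuracy of a random-forest baseline on the hand-crafted features."""
    X = np.stack([capture_features(c) for c in captures])
    y = np.asarray(labels)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(clf, X, y, cv=5).mean()

Competitive entries would be expected to replace these global statistics with richer spatial and temporal features of the pressure data.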

Challenge homepage: http://www.utwente.nl/touch-challenge-2015

Important Dates

Release of training data: April 13th, 2015
Release of test data (without labels): May 18th, 2015
Submission of test set labels: May 25th, 2015
Performance results and release of test set labels: June 1st, 2015
Submission of paper: August 1st, 2015
Notification of acceptance: August 20th, 2015
Camera-ready submissions: September 15th, 2015
Challenge workshop: November 9th, 2015

Organizers

Merel Jung (University of Twente, The Netherlands)
Mannes Poel (University of Twente, The Netherlands)
Karon MacLean (University of British Columbia, Canada)
Laura Cang (University of British Columbia, Canada)

Contact

touch.challenge@gmail.com

 


 

Emotion Recognition in the Wild Challenge 2015

The Third Emotion Recognition in the Wild (EmotiW) 2015 Grand Challenge is an all-day event focused on affective sensing in unconstrained conditions. It comprises two sub-challenges that mimic real-world conditions: audio-video based emotion classification and image based facial expression recognition. EmotiW 2015 extends the platform introduced in EmotiW 2013 and 2014; the image based facial expression recognition sub-challenge is new this year.
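For readers unfamiliar with the format of the image based sub-challenge, the sketch below shows one possible static-image baseline, assuming pre-cropped grayscale face images with expression labels; it uses HOG features with a linear SVM and is purely illustrative, not the official challenge baseline.

# Illustrative static-image facial expression baseline (not the official EmotiW baseline).
# Assumption: faces are pre-cropped grayscale images; labels are expression classes.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def hog_features(face, size=(96, 96)):
    """HOG descriptor of one pre-cropped grayscale face image."""
    face = resize(face, size, anti_aliasing=True)
    return hog(face, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def train_expression_classifier(faces, labels):
    """Fit a linear SVM on HOG features; returns the trained scikit-learn pipeline."""
    X = np.stack([hog_features(f) for f in faces])
    model = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
    model.fit(X, np.asarray(labels))
    return model

An audio-video entry would additionally need to aggregate predictions over video frames and fuse them with acoustic features.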

Challenge website: http://cs.anu.edu.au/few/emotiw2015.html

Team registration: http://cs.anu.edu.au/few/TeamEntry.html

Timeline

Train and validation data available: 15th April 2015
Test data available: 30th June 2015
Last date for uploading the results: 15th July 2015
Paper submission deadline: 25th July 2015
Notification of acceptance: 25th August 2015
Camera-ready: 15th September 2015

Organizers

Abhinav Dhall, University of Canberra/Australian National University
Roland Goecke, University of Canberra/Australian National University
Jyoti Joshi, University of Canberra
Tom Gedeon, Australian National University

 


 

Multimodal Learning and Analytics Grand Challenge 2015

Multimodality is an integral part of teaching and learning. Over the past few decades researchers have been designing, creating and analyzing novel environments that enable students to experience and demonstrate learning through a variety of modalities. The recent availability of low-cost multimodal sensors, advances in artificial intelligence and improved techniques for large-scale data analysis have enabled researchers and practitioners to push the boundaries of multimodal learning and multimodal learning analytics. In an effort to continue these developments, the 2015 Multimodal Learning and Analytics Grand Challenge will include a combined focus on the development of rich, multimodal learning environments, as well as techniques for analyzing multimodal learning data.

In line with this objective, submissions are sought in two categories: (1) Multimodal Capture of Learning Environments and (2) Multimodal Learning Applications. Participants are invited to submit to one or both categories. Guidelines and additional details can be found on the challenge website (listed below).

Challenge website: http://sigmla.org/mla2015/

Important Dates

Official website launch: April 1, 2015
Deadline to submit preliminary abstract: June 1, 2015
Feedback provided to participants who make preliminary submissions: June 15, 2015
Deadline for submitting grand challenge papers: July 30, 2015
Notification of acceptance: August 15, 2015
Camera-ready papers due: September 15, 2015
Grand challenge event: November 2015

Organisers

Katherine Chiluiza, Escuela Superior Politécnica del Litoral
Joseph Grafsgaard, North Carolina State University
Xavier Ochoa, Escuela Superior Politécnica del Litoral
Marcelo Worsley, University of Southern California

Contact

multimodallearningandanalytics@gmail.com

 
