Challenges
Developing systems that can robustly understand human-human communication or respond to human input requires identifying the best algorithms and their failure modes. In fields such as computer vision, speech recognition, and computational linguistics, the availability of datasets and common tasks has led to great progress. We will invite the ICMI community to collectively define and tackle scientific Grand Challenges in multimodal interaction for the next 5 years.
Third Multimodal Learning Analytics Workshop and Grand Challenges
Advances in Learning Analytics are expected to contribute new empirical findings, theories, methods, and metrics for understanding how students learn. These advances could also contribute to improved pedagogical support for students' learning through the assessment of new digital tools, teaching strategies, and curricula.
The most recent direction within this area is Multimodal Learning Analytics, which emphasizes the analysis of natural, rich modalities of communication during situated learning activities, including students' speech, writing, and nonverbal interaction (e.g., gestures, facial expressions, and gaze). A primary objective of multimodal learning analytics is to analyze coherent signal, activity, and lexical patterns in order to understand the learning process and provide feedback to its participants so that it can be improved. The Third International Workshop on Multimodal Learning Analytics will bring together international researchers in multimodal interaction and systems, cognitive and learning sciences, educational technologies, and related areas to advance research on multimodal learning analytics.
Following the First International Workshop on Multimodal Learning Analytics in Santa Monica in 2012 and the ICMI Grand Challenge on Multimodal Learning Analytics in Sydney in 2013, this third workshop will be held at ICMI 2014 in Istanbul, Turkey, on November 12th, 2014. This year the workshop has been expanded to include a session of hands-on training in multimodal learning analytics techniques and two dataset-based grand challenges. Students and postdoctoral researchers are especially welcome to participate.
More information can be found on our website: http://www.sigmla.org/mla2014
Important Dates
March 24, 2014: Both datasets are made available to interested participants
July 1, 2014: Deadline for workshop papers
August 1, 2014: Deadline for grand challenge papers
August 21, 2014: Notification of acceptance
September 15, 2014: Camera-ready papers due
November 12, 2014: Workshop event
Participation Levels
Workshop
The workshop will focus on the presentation of multimodal signal analysis techniques that could be applied in Multimodal Learning Analytics. Rather than requiring research results, which are usually presented at the Learning Analytics and Knowledge (LAK) or International Conference on Multimodal Interaction (ICMI) venues, this event will require presenters to concentrate on the benefits and shortcomings of the methods used for multimodal analysis of learning signals.
Grand Challenges
Following the successful experience of the Multimodal Learning Analytics Grand Challenge at ICMI 2013, this year's event will provide two datasets with diverse research questions for interested participants to tackle:
- Math Data Corpus Challenge:
This challenge uses the Math Data Corpus (Oviatt, 2013). It involves 12 sessions in which small groups of three students collaborate while solving mathematics problems (i.e., geometry, algebra). Data were collected on their natural multimodal communication and activity patterns during these problem-solving and peer tutoring sessions, including students' speech, digital pen input, facial expressions, and physical movements. In total, approximately 15-18 hours of multimodal data are available from these situated problem-solving sessions.
- Presentation Quality Challenge:
This challenge includes a data corpus of 40 oral presentations by Spanish-speaking students, in groups of 4 to 5 members, presenting projects (entrepreneurship ideas, literature reviews, research designs, software designs, etc.). Data were collected on their natural multimodal communication in regular classroom settings. The following data are available: speech, facial expressions, and physical movements in video; skeletal data gathered from a Kinect for each individual; and slide presentation files. In total, approximately 10 hours of multimodal data are available for analysis of these presentations.
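As a concrete illustration of the kind of low-level features such skeletal data can support, the sketch below computes a simple body-movement measure from a sequence of Kinect joint positions. This is only a minimal sketch under stated assumptions: the array layout (frames x joints x x/y/z coordinates) and the synthetic input are illustrative and do not reflect the actual format of the challenge corpus.

    import numpy as np

    def movement_energy(joints):
        """Sum of frame-to-frame joint displacements across a clip.

        joints: array of shape (n_frames, n_joints, 3) with x, y, z positions.
        """
        # Euclidean displacement of every joint between consecutive frames.
        displacements = np.linalg.norm(np.diff(joints, axis=0), axis=2)
        return float(displacements.sum())

    if __name__ == "__main__":
        # Synthetic stand-in for one presenter's skeletal stream (hypothetical shape).
        rng = np.random.default_rng(0)
        skeleton = rng.normal(size=(300, 20, 3))  # 300 frames, 20 joints
        print(f"movement energy: {movement_energy(skeleton):.2f}")

A feature of this kind could, for example, be compared across presenters or correlated with presentation-quality ratings; the design choice here is simply to reduce a long skeletal stream to a single interpretable scalar.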
Organization
Xavier Ochoa, ESPOL, Ecuador (xavier@cti.espol.edu.ec)
Marcelo Worsley, Stanford, USA (mworsley@stanford.edu)
Katherine Chiluiza, ESPOL, Ecuador (kchilui@espol.edu.ec)
Saturnino Luz, Trinity College Dublin, Ireland (luzs@cs.tcd.ie)
Contact
Xavier Ochoa (xavier@cti.espol.edu.ec)
The Second Emotion Recognition In The Wild Challenge and Workshop (EmotiW) 2014
The Second Emotion Recognition In The Wild Challenge and Workshop (EmotiW) 2014 consists of an audio-video based emotion classification challenge that mimics real-world conditions. Traditionally, emotion recognition has been performed on laboratory-controlled data. While undoubtedly worthwhile at the time, such lab-controlled data poorly represent the environment and conditions faced in real-world situations. With the increase in the number of video clips available online, it is worthwhile to explore the performance of emotion recognition methods that work 'in the wild'. The goal of this Grand Challenge is to extend and carry forward the common platform for evaluating emotion recognition methods in real-world conditions that was defined in EmotiW 2013 at the ACM International Conference on Multimodal Interaction 2013.
The database used in the 2014 challenge is the Acted Facial Expressions in the Wild (AFEW) 4.0, which has been collected from movies depicting close-to-real-world conditions. Three sets, for training, validation, and testing, will be made available. The challenge seeks participation from researchers working on emotion recognition who intend to create, extend, and validate their methods on data captured in real-world conditions.
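For orientation, a minimal scoring sketch is given below: it computes overall and per-class accuracy from lists of true and predicted labels, as a participant might do on the validation set before submitting test predictions. The seven emotion categories listed are an assumption based on common categorical emotion sets and may not match the official AFEW 4.0 label list; the sketch is illustrative and is not provided by the organizers.

    from collections import Counter

    # Hypothetical label set; the official AFEW 4.0 categories may differ.
    LABELS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

    def accuracy(y_true, y_pred):
        """Return overall accuracy and per-class accuracy for categorical labels."""
        correct = sum(t == p for t, p in zip(y_true, y_pred))
        overall = correct / len(y_true)
        totals = Counter(y_true)                                  # clips per true class
        hits = Counter(t for t, p in zip(y_true, y_pred) if t == p)  # correct per class
        per_class = {label: hits[label] / totals[label] for label in LABELS if totals[label]}
        return overall, per_class

    if __name__ == "__main__":
        truth = ["happy", "sad", "angry", "happy"]
        preds = ["happy", "neutral", "angry", "sad"]
        overall, per_class = accuracy(truth, preds)
        print(f"overall: {overall:.2f}", per_class)

Per-class figures matter here because 'in the wild' data tend to be imbalanced across emotion categories, so overall accuracy alone can be misleading.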
Challenge homepage: http://cs.anu.edu.au/few/emotiw2014.html
Team registration: http://cs.anu.edu.au/few/TeamEntry.html
Timeline
Train and validate data available: 25th April 2014
Test data available: 4th July 2014
Last date for uploading the results: 25th July 2014
Paper submission deadline: 1st August 2014
Notification: 25th August 2014
Camera-ready papers: 15th September 2014
Organizers
Abhinav Dhall, Australian National University
Roland Goecke, University of Canberra/Australian National University
Jyoti Joshi, University of Canberra
Karan Sikka, University of California San Diego
Tom Gedeon, Australian National University
Contact
Emotiw2014@gmail.com
MAPTRAITS'14 - Personality Mapping Challenge and Workshop 2014
Organised in conjunction with ACM ICMI'14, 12 Nov. 2014, Istanbul/Turkey
The Personality Mapping Challenge & Workshop (MAPTRAITS) series is a competition event aimed at comparing signal processing and machine learning methods for automatic visual, vocal, and/or audio-visual analysis of personality traits and social dimensions. The MAPTRAITS'14 challenge aims to bring together existing efforts and major accomplishments in the modelling and analysis of personality and social traits, in both discrete and continuous time and/or space, while focusing on current trends and pushing the state of the art in the field towards new directions.
Organisers
Hatice Gunes, Queen Mary University of London, UK
Björn Schuller, Technische Universität München, Germany / Imperial College London, UK
Oya Celiktutan, Queen Mary University of London, UK
Evangelos Sariyanidi, Queen Mary University of London, UK
Florian Eyben, Technische Universität München, Germany
Important Dates
Challenge data released: 30 April, 2014
Baseline results: 30 May, 2014
Paper submission: 18 August, 2014
Notification: 1 September, 2014
Camera-ready submission: 15 September, 2014
Challenge and Workshop: 12 November, 2014
Data
The MAPTRAITS Quantized Dataset consists of audio-visual interaction clips of 11 different subjects. These clips have been assessed by 6 raters along the five dimensions of the Big Five (BF) personality model, namely extraversion, agreeableness, conscientiousness, neuroticism, and openness, and four additional dimensions: engagement, facial attractiveness, vocal attractiveness, and likability. Each dimension was scored on a ten-point Likert scale, from strongly disagree to strongly agree, mapped onto the range [1, 10].
The MAPTRAITS Continuous Dataset has been created for the continuous prediction of traits in time and in space. The raters used an annotation tool to view each clip and continuously provide scores over time by moving a slider between 0 and 100. There are approximately 32-40 visual-only annotations per video for the five dimensions of the BF model as well as engagement, likability, and facial attractiveness, and 25 audio-visual annotations per video for the agreeableness, conscientiousness, openness, engagement, and vocal attractiveness dimensions.
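To make the two annotation formats concrete, the following minimal sketch (not an official MAPTRAITS tool) shows one plausible way to aggregate them: averaging the six raters' quantized Likert scores per dimension, and averaging several continuous 0-100 traces after resampling them to a common length. The data shapes and variable names are illustrative assumptions, not the released data format.

    import numpy as np

    def mean_quantized_score(ratings):
        """ratings: (n_raters, n_dimensions) integer Likert scores in [1, 10].

        Returns the mean score per dimension across raters.
        """
        return ratings.mean(axis=0)

    def mean_continuous_trace(traces, length=100):
        """Resample each 0-100 annotation trace to `length` samples and average
        across annotators, so traces of different durations can be combined."""
        grid = np.linspace(0.0, 1.0, length)
        resampled = [np.interp(grid, np.linspace(0.0, 1.0, len(t)), t) for t in traces]
        return np.mean(resampled, axis=0)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        likert = rng.integers(1, 11, size=(6, 9))   # 6 raters, 9 dimensions (hypothetical)
        print(mean_quantized_score(likert))
        traces = [rng.uniform(0, 100, size=n) for n in (80, 95, 110)]
        print(mean_continuous_trace(traces)[:5])

Averaging is only one possible fusion of rater judgements; challenge participants may of course prefer other aggregation or reliability-weighting schemes.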
Support
The event is partially supported by the MAPTRAITS Project funded by the Engineering and Physical Sciences Research Council UK (EPSRC) (Grant Ref: EP/K017500/1).