24th ACM International Conference on Multimodal Interaction
(7-11 Nov 2022)

ICMI Pre-Conference Workshop (26th Feb 2022)

The online Pre-Conference Workshop (PCW) of the ACM International Conference on Multimodal Interaction (ICMI) 2022 features eminent speakers from both academia and industry, covering a broad range of topics in multimodal interaction. The event will take place online on the Airmeet platform and is sponsored by the Mphasis Cognitive Computing COE at IIIT Bangalore.

Event registration

Schedule

| Speaker | Time (IST) | Title | Session Chair |
| --- | --- | --- | --- |
| Dinesh (IIITB) | 9:00 – 9:10 | Opening remarks | Raj (Openstream) |
| Phil Cohen (Openstream) | 9:10 – 9:55 | Platform for Collaborative Multimodal Plan-Based Multiparty Dialogue Systems | Raj (Openstream) |
| Dan Bohus (Microsoft) | 10:00 – 10:40 | Platform for Situated Intelligence: an Open-Source Framework for Multimodal, Integrative AI Systems | Raj (Openstream) |
| Break | 10:45 – 11:00 | – | – |
| Pradipta Biswas (IISc) | 11:00 – 11:40 | Multimodal Intelligent User Interface for Cars | Dinesh (IIITB) |
| Jainendra Shukla (IIITD) | 11:45 – 12:25 | Semi-Supervised Learning for Listener Backchannels in Hindi for Embodied Conversational Agents | Dinesh (IIITB) |
| Sriparna Saha (IIT Patna) | 12:30 – 13:10 | Multimodal Information Processing: Applications in Dialogue Systems and Summarization | Dinesh (IIITB) |
| Anand Mishra (IIT Jodhpur) | 14:00 – 14:40 | More You Know, Better You Answer: Towards Knowledge-aware VQA Models | Abhinav (IIT Ropar) |
| Rajiv Ratn Shah (IIITD) | 14:45 – 15:25 | Harnessing Multimodal Data and AI for Accessibility | Abhinav (IIT Ropar) |
| Akash James (NVIDIA Ambassador) | 15:30 – 16:30 | Improving Virtual Assistants with Computer Vision and NLP Multimodal Systems | Abhinav (IIT Ropar) |