July 12, 2020
2nd AIMA4EDU Workshop at IJCAI 2020
AI-based Multimodal Analytics
for Understanding Human Learning
in Real-world Educational Contexts
IJCAI 2020, Yokohama, Japan
We invite submissions in the following categories:

Full Paper (4-6 pages): Full papers should present original research work.
Short Paper (2-3 pages): Short papers can be position or early-results papers.
Demo/Analytics Paper (2-3 pages): Demo papers should describe a demonstration.

Paper Links
Yueqi Wang, Zachary Pardos
Carla Guerra, Francisco S. Melo, Manuel Lopes
Fangli Xu, Lingfei Wu, Shimeng Peng, Richard Jiarui Tong

Meeting notes are listed here
About AIMA4EDU
Human learning is a complex, interactive, and iterative process that takes place at a very fine-grained level. However, our ability to understand this fascinating latent learning process is often limited by what we can perceive and how we can measure it. Recent advances in sensing technology, together with accompanying techniques for processing multimodal data that manifest the psychological as well as physiological processes underlying human learning, give us a new opportunity to look at this classical problem through a new lens. The emerging types of data include, but are not limited to, students' physiological signals such as EKG or EEG waveforms, and students' speech, facial expressions, and postures, within the context of particular learning activities. We are particularly interested in data gathered from real-world educational activities rather than from controlled lab environments.
IMPORTANT DATES
June 15th, 2020
Paper submission deadline
July 30th, 2020
Notification of acceptance/rejection
August 15th, 2020
Camera-ready version deadline
Jan 7th, 2021
Workshop takes place
Corresponding local times
7 Jan, 9am – 2pm, Japan (GMT+9)
7 Jan, 8am – 1pm, China (GMT+8)
7 Jan, 11am – 4pm, Sydney (GMT+11)
6 Jan, 7pm – 12am, New York (GMT-5)
6 Jan, 4pm – 9pm, California (GMT-8)
Schedule of the half-day virtual workshop (all times UTC)
1) 12:00 ~ 12:15 am
Opening remarks
2) 12:15 ~ 12:45 am
Invited talk 1
A Mixed-Reality AI System to Support STEM Learning, Nesra Yannier (CMU)
3) 12:45 ~ 1:15 am
Invited talk 2
Multimodal AI Approaches, LP Morency (CMU)
4) 1:15 ~ 2:00 am
Panel Discussion - Tom, Jia Xue (UT), Zach
5) 2:00 ~ 2:15 am
Break
6) 2:15 ~ 3:15 am
Paper presentations
• Yueqi Wang, et al., BertKT: A Purely Attention-Based, Bidirectional Deep Learning Architecture for Knowledge Tracing
• Carla Guerra, et al., Interactive Teaching with Unknown Classes of Bayesian Learners
• Fangli Xu, et al., Predicting Students' Performance in Online Adaptive Learning System Using Attention Extracted from EEG
7) 3:15 ~ 3:45 am
Invited talk 3
Affective artificial intelligence and learning, Kang Lee (UT)
8) 3:45 ~ 4:15 am
Invited talk 4
State of the Art, Challenges, and Future Directions of Multimodal Machine Learning in Education, Zitao Jerry Liu (TAL)
9) 4:15 ~ 4:45 am
Closing Discussion - Edgar, Wei Cui, Guodong
10) 4:45 ~ 5:00 am
Final Remarks
Organizing Committee
Program Committee
(In Alphabetical Order)
Zachary Pardos (University of California Berkeley)
Lujie Karen Chen (CMU)
Ying Yao (University of Toronto)
Jing Jiang (University of Technology Sydney)
Lu Liu (University of Technology Sydney)
Richard Tong (Squirrel AI Learning)
Tony Chen (The University of Queensland)
Tianyi Zhou (University of Washington)