CSC News
Improving AI’s Ability to Identify Students Who Need Help
For Immediate Release
Matt Shipman | News Services | 919.515.6386
Jonathan Rowe | 919.515.2476
Researchers have designed an artificial intelligence (AI) model that is better able to predict how much students are learning in educational games. The improved model makes use of an AI training concept called multi-task learning, and could be used to improve both instruction and learning outcomes.
Multi-task learning is an approach in which one model is asked to perform multiple tasks.
“In our case, we wanted the model to be able to predict whether a student would answer each question on a test correctly, based on the student’s behavior while playing an educational game called Crystal Island,” says Jonathan Rowe, co-author of a paper on the work and a research scientist in North Carolina State University’s Center for Educational Informatics (CEI).
“The standard approach for solving this problem looks only at overall test score, viewing the test as one task,” Rowe says. “In the context of our multi-task learning framework, the model has 17 tasks – because the test has 17 questions.”
The researchers had gameplay and testing data from 181 students. The AI could examine each student’s gameplay alongside how that student answered Question 1 on the test. By identifying behaviors common to students who answered Question 1 correctly, and behaviors common to students who got it wrong, the model could predict whether a new student would answer Question 1 correctly.
The model performs this analysis for every question simultaneously; the gameplay under review for a given student is the same, but the model evaluates that behavior in the context of Question 2, Question 3, and so on.
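To make the multi-task setup concrete, the sketch below shows one common way to structure such a model: a single shared network reads a student’s sequence of gameplay events, and a separate output head predicts the probability of a correct answer for each of the 17 test questions. This is a minimal illustration written in PyTorch; the class name, layer sizes, and single-layer encoder are assumptions for the example, not the authors’ published architecture.

    import torch
    import torch.nn as nn

    class MultiTaskStudentModel(nn.Module):
        """Shared gameplay encoder with one prediction head per test question."""

        def __init__(self, feature_dim=32, hidden_dim=64, num_questions=17):
            super().__init__()
            # Shared encoder: reads the sequence of gameplay events once.
            self.encoder = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
            # One binary classifier head per question (17 "tasks").
            self.heads = nn.ModuleList(
                [nn.Linear(hidden_dim, 1) for _ in range(num_questions)]
            )

        def forward(self, gameplay):
            # gameplay: (batch, timesteps, feature_dim) sequences of game events
            _, (hidden, _) = self.encoder(gameplay)
            shared = hidden[-1]  # final hidden state, shared by every task
            # Each head predicts P(correct) for its question from the same features.
            return torch.cat(
                [torch.sigmoid(head(shared)) for head in self.heads], dim=1
            )

    model = MultiTaskStudentModel()
    gameplay = torch.randn(8, 50, 32)   # 8 students, 50 gameplay events each
    predictions = model(gameplay)       # shape (8, 17): one probability per question

During training, a model like this would typically combine the 17 per-question errors into a single loss (for example, summing a binary cross-entropy term per head), so that behaviors predictive of any one question help shape the shared encoder.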
And this multi-task approach made a difference. The researchers found that the multi-task model was about 10 percent more accurate than other models that relied on conventional AI training methods.
“We envision this type of model being used in a couple of ways that can benefit students,” says Michael Geden, first author of the paper and a postdoctoral researcher at NC State. “It could be used to notify teachers when a student’s gameplay suggests the student may need additional instruction. It could also be used to facilitate adaptive gameplay features in the game itself. For example, altering a storyline in order to revisit the concepts that a student is struggling with.
“Psychology has long recognized that different questions have different values,” Geden says. “Our work here takes an interdisciplinary approach that marries this aspect of psychology with deep learning and machine learning approaches to AI.”
“This also opens the door to incorporating more complex modeling techniques into educational software – particularly educational software that adapts to the needs of the student,” says Andrew Emerson, co-author of the paper and a Ph.D. student at NC State.
The paper, “Predictive Student Modeling in Educational Games with Multi-Task Learning,” will be presented at the 34th AAAI Conference on Artificial Intelligence, being held Feb. 7-12 in New York, N.Y. The paper was co-authored by James Lester, Distinguished University Professor of Computer Science and director of CEI at NC State; and by Roger Azevedo of the University of Central Florida.
The work was done with support from the National Science Foundation, under grant DRL-1661202; and from the Social Sciences and Humanities Research Council of Canada, under grant SSHRC 895-2011-1006.
-shipman-
Note to Editors: The study abstract follows.
“Predictive Student Modeling in Educational Games with Multi-Task Learning”
Authors: Michael Geden, Andrew Emerson, Jonathan Rowe and James Lester, North Carolina State University; Roger Azevedo, University of Central Florida
Presented: Feb. 7-12 at the 34th AAAI Conference on Artificial Intelligence in New York, N.Y.
Abstract: Modeling student knowledge is critical in adaptive learning environments. Predictive student modeling enables formative assessment of student knowledge and skills, and it drives personalized support to create learning experiences that are both effective and engaging. Traditional approaches to predictive student modeling utilize features extracted from students’ interaction trace data to predict student test performance, aggregating student test performance as a single output label. We reformulate predictive student modeling as a multi-task learning problem, modeling questions from student test data as distinct “tasks.” We demonstrate the effectiveness of this approach by utilizing student data from a series of laboratory-based and classroom-based studies conducted with a game-based learning environment for microbiology education, Crystal Island. Using sequential representations of student gameplay, results show that multi-task stacked LSTMs with residual connections significantly outperform baseline models that do not use the multi-task formulation. Additionally, the accuracy of predictive student models is improved as the number of tasks increases. These findings have significant implications for the design and development of predictive student models in adaptive learning environments.
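The abstract’s reference to “multi-task stacked LSTMs with residual connections” suggests an architecture along the following lines: several LSTM layers stacked on top of one another, with skip connections adding each layer’s input to its output before the per-question heads. The PyTorch sketch below is an illustration under stated assumptions, not the paper’s exact configuration; the number of layers, dimensions, and residual placement are all hypothetical.

    import torch
    import torch.nn as nn

    class StackedResidualLSTM(nn.Module):
        """Stacked LSTM layers with residual (skip) connections between them."""

        def __init__(self, feature_dim=32, hidden_dim=64, num_layers=3,
                     num_questions=17):
            super().__init__()
            # Project input features to the hidden size so residual sums line up.
            self.input_proj = nn.Linear(feature_dim, hidden_dim)
            self.layers = nn.ModuleList(
                [nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
                 for _ in range(num_layers)]
            )
            # One output head per test question, as in the multi-task sketch above.
            self.heads = nn.ModuleList(
                [nn.Linear(hidden_dim, 1) for _ in range(num_questions)]
            )

        def forward(self, x):
            h = self.input_proj(x)       # (batch, timesteps, hidden_dim)
            for lstm in self.layers:
                out, _ = lstm(h)
                h = h + out              # residual connection around each layer
            shared = h[:, -1]            # last timestep as the shared representation
            return torch.cat(
                [torch.sigmoid(head(shared)) for head in self.heads], dim=1
            )

Residual connections of this kind let gradients flow directly past each LSTM layer, which generally makes deeper stacked recurrent models easier to train on long gameplay sequences.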