For years, educators have tried to draw lessons about students and the learning process from the data trails that students leave behind with every click in a digital textbook, learning management system or other online learning tool. It is an approach known as “learning analytics.”
Today, learning analytics advocates are exploring how the advent of ChatGPT and other generative AI tools brings new possibilities (and raises new ethical questions) for the practice.
One possible application is to use new artificial intelligence tools to help educators and researchers make sense of all the student data they have been collecting. Many learning analytics systems have dashboards that provide teachers or administrators with metrics and visualizations about students based on their use of digital classroom tools. The idea is that the data can be used to intervene if a student shows signs of disengaging or going off track. But many educators are not used to sorting through large data sets of this type and may have difficulty navigating these analytics dashboards.
“AI-powered chatbots will be a kind of intermediary, a translator,” says Zachary Pardos, an associate professor of education at the University of California, Berkeley, and one of the editors of a forthcoming special issue of the Journal of Learning Analytics that will be dedicated to generative AI in the field. “The chatbot could incorporate 10 years of scientific literature on learning” to help analyze and explain in plain language what a dashboard shows, he adds.
Learning analytics advocates are also using new artificial intelligence tools to help analyze online course discussion forums.
“For example, if you're looking at a discussion forum and want to mark posts as 'on-topic' or 'off-topic,'” Pardos says, it previously required much more time and effort for a human researcher to follow a rubric to label such posts, or to train an older type of computer system to classify the material. Now, however, large language models can easily mark discussion posts as on-topic or off-topic “with a minimal amount of prompt engineering,” Pardos says. In other words, with just a few plain-language instructions to ChatGPT, the chatbot can sort through large amounts of student writing and turn it into numbers that educators can quickly analyze.
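The workflow Pardos describes can be sketched in a few lines of Python. The function names (`build_label_prompt`, `label_posts`) and the `call_llm` hook are illustrative assumptions, not any system named in the article; `fake_llm` is a toy stand-in where a real deployment would call a chat-completion API.

```python
# Sketch: labeling discussion-forum posts as on-topic or off-topic with an LLM.
# The prompt template is the "prompt engineering" piece; `call_llm` is a
# hypothetical stand-in for a real chat-completion API call.

def build_label_prompt(course_topic: str, post: str) -> str:
    """Assemble the classification instructions for one forum post."""
    return (
        f"You are labeling discussion posts for a course on {course_topic}.\n"
        "Reply with exactly one word: on-topic or off-topic.\n\n"
        f"Post: {post}"
    )

def label_posts(course_topic, posts, call_llm):
    """Label each post, turning free-text student work into countable data."""
    labels = []
    for post in posts:
        reply = call_llm(build_label_prompt(course_topic, post))
        labels.append("on-topic" if "on-topic" in reply.lower() else "off-topic")
    return labels

# Toy stand-in for the model, so the sketch runs without an API key:
def fake_llm(prompt):
    return "on-topic" if "derivative" in prompt else "off-topic"

print(label_posts("calculus",
                  ["What is a derivative?", "Anyone selling a bike?"],
                  fake_llm))
# → ['on-topic', 'off-topic']
```

Once posts carry labels like these, counting them (posts per student, off-topic rate per week) is the kind of aggregation a dashboard can display.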
Findings from learning analytics research are also being used to help train new generative AI-powered tutoring systems. “Traditional learning analytics models can track a student's level of knowledge mastery based on their digital interactions, and this data can be vectorized to feed into an LLM-based AI tutor to improve the relevance and performance of the AI tutor in its interactions with students,” says Mutlu Cukurova, professor of learning and artificial intelligence at University College London.
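One way to picture Cukurova's point: a mastery model's per-skill estimates can be rendered into context that an LLM tutor receives before replying. This is a minimal sketch under assumed details; the skill names, probabilities, and the `mastery_context` helper are all hypothetical, and a real system might pass embeddings rather than text.

```python
# Sketch: summarizing a learning-analytics mastery model for an LLM tutor.
# The mastery probabilities (e.g., from a knowledge-tracing model) are made up
# for illustration; the rendered text would be prepended to the tutor's prompt
# so its replies target the student's weakest skills.

def mastery_context(mastery: dict) -> str:
    """Render per-skill mastery probabilities as tutor context, weakest first."""
    ranked = sorted(mastery.items(), key=lambda kv: kv[1])
    lines = [f"- {skill}: {p:.0%} mastered" for skill, p in ranked]
    return "Student skill profile (weakest first):\n" + "\n".join(lines)

profile = {"fractions": 0.35, "decimals": 0.80, "percentages": 0.55}
print(mastery_context(profile))
```

The design choice here is simply that the analytics model supplies the *state* (what the student knows) while the LLM supplies the *pedagogy* (what to say next).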
Another big application is assessment, says Pardos, the Berkeley professor. Specifically, new AI tools can be used to improve how educators measure and grade a student's progress through course materials. The hope is that new AI tools will make it possible to replace many multiple-choice exercises in online textbooks with fill-in-the-blank or essay questions.
“The accuracy with which LLMs seem to be able to grade open-ended response types seems very comparable to that of a human,” he says. “Therefore, we may see more learning environments capable of accommodating those more open-ended questions that push students to show more creativity and different kinds of thinking than if they were asked for a single deterministic answer.”
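LLM-assisted grading of the kind Pardos describes typically means prompting the model with the question, a rubric, and the student's answer, then parsing a score out of its reply. The sketch below assumes those details; `build_grading_prompt`, `grade`, and `fake_llm` are illustrative names, and `fake_llm` stands in for a real model call.

```python
# Sketch: using an LLM to score open-ended (fill-in-the-blank or short-essay)
# answers against a rubric. `call_llm` is a hypothetical hook for a real
# chat-completion call; the parsing shows how free text becomes a grade.

def build_grading_prompt(question: str, rubric: str, answer: str) -> str:
    return (
        f"Question: {question}\n"
        f"Rubric: {rubric}\n"
        f"Student answer: {answer}\n"
        "Reply with only an integer score from 0 to 5."
    )

def grade(question, rubric, answer, call_llm):
    """Return the model's score clamped to 0-5, or None if unparseable."""
    reply = call_llm(build_grading_prompt(question, rubric, answer))
    digits = [int(tok) for tok in reply.split() if tok.isdigit()]
    return min(max(digits[0], 0), 5) if digits else None

def fake_llm(prompt):  # toy stand-in for a real model
    return "4"

print(grade("Explain photosynthesis.",
            "5 = full mechanism; 0 = blank or off-topic.",
            "Plants use light to turn CO2 and water into sugar.",
            fake_llm))
# → 4
```

The `None` fallback matters in practice: because model replies are free text, a grading pipeline needs a defined behavior (such as routing to a human) when no score can be parsed.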
Concerns about bias
However, these new AI tools pose new challenges.
One problem is algorithmic bias. These issues were already a cause for concern even before the emergence of ChatGPT. Researchers were concerned that when systems made predictions about an at-risk student based on large data sets about previous students, the result could be the perpetuation of historical inequalities. The response was to call for more transparency in the learning algorithms and data used.
Some experts worry that new generative AI models have what the editors of the Journal of Learning Analytics call a “notable lack of transparency in explaining how their results are produced,” and many AI experts worry that ChatGPT and other new tools also reflect cultural and racial biases in ways that are difficult to track or address.
Additionally, large language models are known to occasionally “hallucinate,” producing factually inaccurate information in some situations, which raises concerns about whether they can be made reliable enough for tasks such as helping to assess students.
For Shane Dawson, professor of learning analytics at the University of South Australia, new artificial intelligence tools make more pressing the question of who builds the algorithms and systems that will hold the most power if learning analytics becomes more widely adopted in schools and universities.
“There is a transfer of agency and power at all levels of the education system,” he said in a recent talk. “In a classroom, when your K-12 teacher, sitting there teaching your child to read, hands you an iPad with an AI-powered app, and that app makes a recommendation to that student, who now has the power? Who has agency in that classroom? These are questions we need to address as a field of learning analytics.”