Education has had a rocky relationship with the still-evolving presence of generative AI in schools, with some school districts banning it only to reverse course later. The technology can save teachers time by automating tasks while also causing headaches as an accomplice for students who cheat.
So how much work would it take to develop guidelines that help educators manage the challenges of using generative AI tools in their work? In Michigan, it was a team effort.
A coalition of 14 educational organizations, led by the nonprofit Michigan Virtual Learning Research Institute, published sample guidelines earlier this month that walk teachers and administrators through potential pitfalls to consider before using an AI tool in the classroom or for other tasks. Those include checking the accuracy of AI-generated content, citing AI-generated content, and judging what types of data are safe to input into an AI program.
Ken Dirkin, senior director of the Michigan Virtual Learning Research Institute, said the group wanted to create a document that was digestible, but “there are probably 40,000 important things that could have been included.”
“What we're experiencing when we go out and work with school districts is that there's a general lack of knowledge, interest, and awareness” of generative AI, Dirkin says, “but also fear of getting in trouble, or fear that they're doing something wrong, because there is no solid guidance on what they should explore or do.”
Dirkin says the group wanted the document to help school districts and educators think about using generative AI without swinging to the extremes of banning it outright or allowing unrestricted use.
“That's really been our modus operandi: How can we enable exploration and not disable access,” he says, “or have people say, 'It's the latest trend and it will go away.'”
The speed at which generative AI is evolving makes this a critical time for educators and districts to have guidelines for when and how to use it, says Mark Smith, executive director of the Michigan Association of Computer Users in Learning.
“AI is everywhere. It is doing everything it can for everyone and anyone who is interested,” he says. “By the time we have an idea of the one-, three-, and five-year plan, it will be changing right under our noses. If we do not address this now with an agile and flexible guiding policy or strategy as a collective whole, the situation will continue to change.”
Student data protection
School principals want to know how AI can be used in the classroom beyond having students copy and paste, says Paul Liabenow, and of course they worry that students will use it to cheat.
But many of the questions he receives as executive director of the Michigan Association of Elementary and Secondary School Principals focus on legal compliance, Liabenow explains: how AI programs can remain in line with student privacy laws such as FERPA and the Individuals with Disabilities Education Act.
“There are a lot of questions coming up weekly, and they are increasing,” Liabenow says. Principals want guidance from organizations like Michigan Virtual “not only to avoid entering the black hole as a leader, but to use it effectively to improve student achievement.”
The AI guidance document urges educators to assume that, unless the company that owns a generative AI tool has an agreement with their school district, any data they input will be available to the public.
Liabenow says one of his privacy concerns involves any teacher, counselor or administrator who wants to use an AI program to manage student data on mental health or discipline, something that has the potential to end in a lawsuit.
“People think that they will be able to run master schedules with AI tools, where they will enter the names of individual students, and that creates some ethical and legal challenges,” Liabenow says. “I love this guidance tool because it reminds us of areas where we need to be sensitive and diligent about protection.”
Smith, of the Michigan Association of Computer Users in Learning, says the privacy issues lie not in the everyday use of generative AI but in the growing number of applications that may have weak data protection policies, the kind of agreement that virtually no one reads when registering for an online service. It may become easier to break privacy laws, he adds, given proposed changes to strengthen the Children's Online Privacy Protection Act.
“How many of us have downloaded the updated agreement for our iPhone without reading it?” Smith says. “If you expand that to 10,000 students in a district, you can imagine how many end-user agreements you would have to read.”
Is AI your co-author?
It is not just students' use of AI that needs to be considered. Teachers use generative AI to create lesson plans, and any school district employee could use it to help write a working document.
That's why the new guidelines include examples of how to cite the use of generative AI in educational materials, research materials, or working documents.
“The more we disclose the use of AI and its purpose, the more we bring everyone into the conversation,” Dirkin says. “I don't think that in two or three years people will be disclosing the use of AI (it will just be in our workflows), but for now it is important to learn from each other and tie it to human participation in the process. Eventually that will go away.”
When AI is integrated into everything
Generative AI is increasingly being integrated into software that is already widely used. Consider spell-checking programs like Grammarly, which a Georgia student says got her accused of cheating after a paper she wrote with it was flagged by AI detection software.
That increasing ubiquity will make AI-based educational tools easier to access, and therefore more complicated to use with safety in mind, Dirkin says. One mitigating factor in the current generative AI landscape is that people still have to copy and paste content into an AI program (and therefore pause for a moment) to use it.
“A lot of times, it's the Wild West in terms of access to tools. Everyone has a Google account, and people can use their Google account to sign in to a lot of free services,” Dirkin says. “We wanted to make sure people had a tool to think about whether they are using something legally or ethically, or whether they are violating some kind of policy, before they do so. So stop and think.”
Smith points to the section of the new guidelines that asks educators to think about how something generated by AI could be inaccurate or contain bias. Even as generative AI improves, he says, “all AI has risks and limitations, no matter how good it is.”
“Sometimes the best data set for an educator is the teacher next door with 10 more years of experience, and not an AI tool,” Smith says. “There is still a human element to this, and I think the guidance document mentioning those risks and limitations is kind of a friendly nudge. It's a polite way of saying, 'Hey, don't forget this.'”