This summer, many professors and lecturers are spending time experimenting with AI tools to help them prepare presentations, craft tests and assignment questions, and more. This is due in part to a host of new tools and updated features that ChatGPT has rolled out over the past few weeks.
As more instructors experiment with using generative AI to create teaching materials, an important question arises: should they disclose this to students?
It’s a fair question, given widespread concern in the field about students using AI to write their essays or robots to do their homework for them. If students are required to explain when and how they use AI tools, should educators do so as well?
When Marc Watkins returns to classrooms this fall to teach a course on digital media studies, he plans to make clear to students how he is using AI behind the scenes to prepare classes. Watkins is a professor of writing and rhetoric at the University of Mississippi and director of the university’s AI Summer Institute for Writing Teachers, an optional program for faculty.
“We need to be open, honest and transparent if we use AI,” Watkins says. “I think it’s important to show them how to do it and how to model this behavior in the future.”
While it may seem logical for teachers and professors to clearly disclose when they use AI to develop teaching materials, just as they ask students to do in assignments, Watkins notes that it’s not as simple as it sounds. In universities, there’s a culture of professors pulling materials from the web without always citing them. And he says K-12 teachers frequently use materials from a variety of sources, including syllabi and textbooks from their schools and districts, resources they’ve obtained from colleagues or found on websites, and materials they’ve purchased from marketplaces like Teachers Pay Teachers. But teachers rarely share with students where these materials come from.
Watkins says that a few months ago, when he saw a demo of a new feature in a popular learning management system that uses AI to help create materials with a single click, he asked a company official whether they could add a button that would automatically watermark AI-generated materials to make that clear to students.
But the company hasn’t been receptive, he says: “The impression I’ve gotten from developers, and this is what’s so maddening about this whole situation, is that they’re basically saying, ‘Who cares?’”
Many educators seem to agree: in a recent survey by Education Week, about 80 percent of K-12 teachers who responded said they don’t need to inform students and parents when they use AI to plan lessons, and most said the same applies to designing assessments and tracking behavior. In open-ended responses, some educators said they view AI as a tool similar to a calculator, or like using content from a textbook.
But many experts say it depends on what the teacher is doing with AI. A teacher might reasonably skip a disclosure when using a chatbot to improve a draft of a handout or slide, for example, but might want to make it clear when using AI for something like helping to grade assignments.
Even as educators are learning to use generative AI tools, they are also grappling with when and how to tell students what they are doing with them.
Lead by example
For Alana Winnick, director of educational technology at the Pocantico Hills Central School District in Sleepy Hollow, New York, it’s important to make clear to colleagues when you’re using generative AI in a new way, one that people might not even realize is possible.
For example, when she first started using the technology to compose emails for staff, she included a line at the end that read: “Written in collaboration with artificial intelligence.” That’s because she had turned to an AI chatbot to ask for ideas to make her message “more creative and engaging,” she explains, and then “tweaked” the result to make the message her own. She imagines teachers could use AI in the same way to create assignments or lesson plans. “No matter what, the ideas should start with the human user and end with the human user,” she emphasizes.
But Winnick, who wrote a book on AI in education called “The Generative Age: Artificial Intelligence and the Future of Education” and hosts a podcast of the same name, sees including such a disclosure note as a temporary practice rather than a fundamental ethical requirement, since she believes this kind of AI use will become commonplace. “I don’t think that in 10 years we will have to do it,” she says. “I did it to raise awareness, normalize it and encourage it, and say, ‘It’s okay.’”
For Jane Rosenzweig, director of the Harvard College Writing Center at Harvard University, the decision to add a disclosure would depend on how the teacher uses AI.
“If an instructor were using ChatGPT to generate feedback on writing, I would absolutely expect them to tell students they were doing so,” she says. After all, the point of any writing instruction, she notes, is to help “two human beings communicate with each other.” When grading a student’s paper, Rosenzweig says she assumes the text was written by the student unless otherwise noted, and she imagines her students expect any feedback they receive to come from their human instructor unless told otherwise.
When EdSurge raised the question in our higher education newsletter of whether teachers and professors should disclose when they are using AI to create instructional materials, some readers responded that they felt it was important to do so, as a learning moment for students and for themselves.
“If we’re just using it to help with brainstorming, then maybe it’s not necessary,” said Katie Datko, director of distance learning and instructional technology at Mt. San Antonio College. “But if we’re using it as a co-creator of content, then we should apply the developing standards for citing AI-generated content.”
In search of policy guidance
Since the launch of ChatGPT, many schools and universities have rushed to create policies on the appropriate use of AI.
But most of those policies don’t address the question of whether educators should tell students how they’re using new generative AI tools, says Pat Yongpradit, chief academic officer at Code.org and leader of TeachAI, a consortium of several education groups working to develop and share guidance for educators on AI. (EdSurge is an independent newsroom that shares a parent organization with ISTE, which participates in the consortium. Learn more about EdSurge’s ethics and policies here and about supporters here.)
A toolkit for schools published by TeachAI recommends: “If a teacher or student uses an AI system, its use should be disclosed and explained.”
But Yongpradit says his personal view is that “it depends” on the type of AI use involved. If AI is just helping to write an email, he explains, or even part of a lesson plan, then disclosure may not be necessary. But there are other activities he says are more central to teaching, such as using AI grading tools, where disclosure should be expected.
However, even if an educator decides to cite an AI chatbot, the mechanics can be complicated, Yongpradit says. While major organizations, including the Modern Language Association and the American Psychological Association, have issued guidelines on how to cite generative AI, he says the approaches remain clumsy.
“It’s like pouring new wine into old wineskins,” he says, “because you’re taking an outdated paradigm of taking and citing source material and applying it to a tool that doesn’t work the same way. Before, things involved humans and were static. It’s strange to fit AI into that model because AI is a tool, not a source.”
For example, the output of an AI chatbot depends heavily on how a prompt is phrased, and most chatbots give a slightly different response every time, even when the exact same prompt is used.
Yongpradit says he recently attended a panel discussion where an educator urged teachers to disclose their use of AI since they are asking their students to do so, which drew applause from the students present. But for Yongpradit, those situations are not equivalent.
“They are totally different things,” he says. “As a student, you are submitting your work to be graded. Teachers already know the material. They are just making their job more efficient.”
That said, “if the teacher posts it and puts it on Teachers Pay Teachers, then yes, they should share it,” he adds.
The important thing, he says, will be for states, districts and other educational institutions to develop their own policies, so that the rules of the game are clear.
“If there is no direction, it creates a wild west of expectations.”