Once upon a time, educators worried about the dangers of CliffsNotes, study guides that presented great works of literature as a series of vignettes that many students used as a replacement for reading.
Today, that certainly seems quaint.
Suddenly, new consumer AI tools have hit the market that can take any piece of text, audio, or video and provide the same kind of simplified summary. And those summaries aren't just a series of clever bullet points. These days, students can have tools like Google's NotebookLM turn their class notes into a podcast, where chipper-sounding AI robots joke around and discuss key points. Most of the tools are free, and they do their job in seconds with the click of a button.
Naturally, all of this is causing concern among some educators, who see students offloading the hard work of synthesizing information to AI at a pace never before possible.
But the bigger picture is more complicated, especially as these tools become more common and their use starts to become standard in business and other contexts beyond the classroom.
And the tools serve as a particular lifeline for neurodivergent students, who suddenly have access to services that can help them stay organized and support their reading comprehension, teaching experts say.
“There is no universal answer,” says Alexis Peirce Caudell, a computer science professor at Indiana University in Bloomington who recently did an assignment in which many students shared their experiences with, and concerns about, AI tools. “Biology students will use it one way, chemistry students will use it another way. All my students use it in different ways.”
It's not as simple as assuming that all students are cheaters, she emphasizes.
“Some students were concerned about the pressure to engage with the tools: if all their peers were doing it, they felt they had to as well, even when they sensed it was getting in the way of their authentic learning,” she says. Students are asking questions like, “Is this helping me get through this specific assignment or this specific exam because I'm trying to navigate five classes and internship applications?” and weighing whether that help comes at the cost of learning.
All of this adds new challenges for schools and universities as they try to set limits and policies around the use of AI in their classrooms.
Need for 'friction'
It seems like almost every week (or even every day) tech companies announce new features that students are adopting in their studies.
Last week, for example, Apple launched Apple Intelligence features for the iPhone, and one of the features can rework any passage of text in different tones, such as casual or professional. And last month, OpenAI, the maker of ChatGPT, released a feature called Canvas, which includes sliders that let users instantly change the reading level of a text.
Marc Watkins, a writing and rhetoric professor at the University of Mississippi, says he worries that students are drawn to the time-saving promises of these tools and don't realize that using them can mean skipping the real work it takes to internalize and remember the material.
“From a teaching and learning point of view, that worries me quite a bit,” he says. “Because we want our students to have a little bit of difficulty, to have a little bit of friction, because that's important to their learning.”
And he says the new features are making it harder for teachers to encourage students to use AI in helpful ways, such as by teaching them how to craft prompts that change the writing level of a text: “It removes that last desirable difficulty, when they can just mash a button and get a final draft and get feedback on the final draft as well.”
Even professors and universities that have adopted AI policies may need to rethink them in light of these new capabilities.
As two professors put it in a recent opinion piece, “Your AI policy is already obsolete.”
“A student who reads an article you uploaded, but who can't remember a key point, uses the AI assistant to summarize it or remind them where they read something. Has this person used AI when there was a ban in the class?” ask the authors, Zach Justus, director of faculty development at California State University, Chico, and Nik Janos, a sociology professor there. They note that popular tools like Adobe Acrobat now have “AI assistant” features that can summarize documents at the press of a button. “Even when we are evaluating our colleagues' tenure and promotion records,” the professors write, “do you have to promise not to press the button when you are reviewing hundreds of pages of student evaluations of teaching?”
Instead of writing and rewriting AI policies, the professors argue, educators should develop broad frameworks for what counts as acceptable help from chatbots.
But Watkins calls on the makers of AI tools to do more to mitigate misuse of their systems in academic settings, or as he put it when EdSurge spoke with him, “to make sure that this tool that is so prominently used by students [is] actually effective for their learning, and not just a tool to offload it.”
Uneven accuracy
These new ai tools pose a host of new challenges beyond those that were in play when paper CliffsNotes were the study tool of the day.
One is that AI summarization tools don't always provide accurate information, due to a large language model phenomenon known as “hallucinations,” in which chatbots guess at facts but present them to users as sure things.
When Bonni Stachowiak first tried the podcast feature of Google's NotebookLM, for example, she says she was impressed by how realistic the robot voices sounded and how well they seemed to summarize the documents she fed the tool. Stachowiak is the host of the long-running podcast Teaching in Higher Ed, as well as dean of teaching and learning at Vanguard University of Southern California, and she regularly experiments with new AI tools in her teaching.
But as she tested the tool more, feeding it documents on complex topics she knew well, she noticed occasional errors or misunderstandings. “It just flattens it out; you lose all these nuances,” she says. “It sounds very intimate because it is a voice, and audio is a very intimate medium. But as soon as it was something you knew a lot about, it was going to fall apart.”
Still, she says she has found NotebookLM's podcasting feature useful for helping her understand and communicate bureaucratic issues at her university, such as by turning part of the faculty handbook into a podcast summary. When she checked with colleagues who knew the policies well, she says they felt it did a “perfectly good job.” “It is very good at making two-dimensional bureaucracy more accessible,” she says.
Indiana University's Peirce Caudell says her students have also raised ethical issues with the use of AI tools.
“Some say they are really concerned about the environmental costs of generative AI and its use,” she says, noting that ChatGPT and other AI models require large amounts of computing power and electricity.
Others, she adds, worry about how much data users end up handing over to AI companies, especially when students use free versions of the tools.
“We're not having that conversation,” she says. “We're not having conversations about what it means to actively resist the use of generative AI.”
Still, the instructor is seeing positive impacts for students, such as when they use a tool to help make flashcards for studying.
And she heard about a student with ADHD who had always found reading long texts “overwhelming,” but who was using ChatGPT “to get over the hump of that initial engagement with the reading, and then they were checking their understanding using ChatGPT.”
And Stachowiak says she has heard of other AI tools that students with intellectual disabilities are using, such as one that helps users break big tasks down into smaller, more manageable subtasks.
“This is not cheating,” she emphasizes. “It's about breaking things down and estimating how long something is going to take. That's not something that comes naturally to a lot of people.”