When GPT-4 came out, Cory Kohn was eager to bring it to the classroom. Kohn, a biology laboratory coordinator in an integrated sciences department serving Claremont McKenna, Pitzer and Scripps colleges, found the tool useful.
It promised to increase efficiency, he argued. But more than that, it would be important to teach his science students how to interact with the tool for their own careers, he first told EdSurge last April. In his view, it would be like familiarizing his students with an early version of the calculator, and students who hadn't encountered it would be at a disadvantage.
Kohn is not the only instructor taking on generative AI. While he's excited about its potential, others aren't so sure what to think about it.
For businesses, artificial intelligence has proven immensely profitable and, according to some, even buoyed the amount of funding flowing into educational technology last year. This has led to a frantic race to commercialize educational tools like AI. But the desire of some business owners to use these tools as substitutes for teachers or personal tutors has sparked skepticism.
Conversations about the ethics of how these tools are implemented have also been somewhat overshadowed, according to one observer. Meanwhile, teachers are already deciding how (or even whether) to adopt these tools in the classroom. And the decisions those teachers make can be influenced by factors like how familiar they are with technology, or even their gender, according to a new study.
A difference of opinion
People are still figuring out the limits of this shiny new piece of technology in education, says Stephen Aguilar, an assistant professor at the University of Southern California's Rossier School of Education. That can lead to mistakes, such as, in his opinion, treating chatbots as substitutes for instructors or paraprofessionals. Implementing the tools this way assumes that rapid, iterative feedback drives critical thinking, when what students really need are deep conversations that take them in unexpected directions, Aguilar says.
If the tools are to deliver on their promise of improving education, Aguilar believes a deeper meditation on what generative AI can do will be necessary, one that goes beyond a focus on the tools' promise to catalyze efficiency.
Aguilar, a former sixth and seventh grade teacher in East Palo Alto, California, is now associate director of the Center for Generative AI and Society, which announced its launch, along with $10 million in seed funding, last year. The center strives to map how AI is reshaping education so it can come up with useful recommendations for educators, Aguilar says. The goal is to truly understand what's happening on the front lines, because no one knows exactly what the major implications will be right now, he adds.
As part of his role at the center, Aguilar conducted research on how teachers think about AI in classrooms. His study, "How Teachers Navigate the Ethical Landscape of AI in Their Classrooms," surveyed 248 K-12 teachers. Those teachers were mostly white and from public schools, which introduces limitations.
The main finding? Teachers' confidence or anxiety about using technology shaped their thoughts about AI.
Perhaps most surprising, the study also found that teachers evaluate the ethical implications of these tools differently depending on their gender. According to the report, when thinking about AI, women tended to be more rule-based in their reasoning, considering what guidelines needed to be followed to use these tools beneficially. They focused on the need to maintain privacy or to avoid bias or confusion arising from the tools. Men, on the other hand, tended to focus more on specific outcomes, such as the ability to boost creativity, according to the report.
Artificial tools, human judgments
When EdSurge first spoke to Kohn, the lab coordinator, he was using ChatGPT as a teaching assistant in biology courses. He cautioned that he couldn't completely replace his human teaching assistants with a chatbot. Sometimes, he said, the chatbot simply made mistakes. For example, when weighing student experiment designs, it would recommend control variables that didn't make sense. Its usefulness therefore had to be weighed on a case-by-case basis.
Kohn also teaches a first-year writing course, AI Chatbots in Science, and remains optimistic. He says his students use ChatGPT Plus, the paid version of OpenAI's ChatGPT, to brainstorm research questions, help digest scientific papers, and simulate data sets. They also conduct an AI review of their writing, Kohn says.
This fits with what Aguilar has observed so far about how the chatbot craze could affect writing instruction. Ultimately, Aguilar argues, large language models could offer an engaging way for students to reflect on their own writing. That's if students can approach them less as generators of writing and more as readers, he says, an extra set of digital eyes that can explore the text. To do this, students still need to evaluate the feedback they receive from these tools, he adds.
Today, Kohn considers a chatbot a kind of TA-plus. It can perform the tasks of a human TA, he says, but also more varied jobs that a librarian or editor would traditionally have done, helping students examine literature or refine ideas.
Still, students have to use it wisely, he adds: “It's not a panacea for telling the truth.”