Since the launch of ChatGPT almost a year ago, teachers have debated whether to ban the tool (for fear that students will use it to cheat) or adopt it as a teaching aid (arguing that the tool could boost learning and will become key in the workplace).
But most students in K-12 schools are not old enough to use ChatGPT without the permission of a parent or guardian, according to the tool’s own rules.
When OpenAI published a new frequently asked questions page for educators in September, one detail surprised some observers. It stated that children under 13 are not allowed to register (a typical restriction, in compliance with federal privacy laws for young children), but also that “users between 13 and 18 years old must have the permission of their parents or guardians to use the platform.”
That means most US middle and high school students can’t even try ChatGPT without parental approval, even if their schools or teachers want to adopt the technology.
“In my eighteen years of working in education… I have never encountered a platform that requires such a strange letter of consent,” wrote Tony DePrato, chief information officer at St. Andrew’s Episcopal School in Mississippi, in an essay earlier this year.
In a follow-up interview this week, DePrato noted that a likely reason for this unusual policy is that “the data in OpenAI still can’t be easily filtered or monitored, so what choice do they have?” He added that many schools have policies that require them to filter or monitor information viewed by students to block profanity, age-restricted images and videos, or material that may violate copyright.
To Derek Newton, a journalist who writes a newsletter on academic integrity, the policy appears to be an effort by OpenAI to deflect concerns that many students use ChatGPT to cheat on assignments.
“It seems like their only reference to academic integrity is buried under a parental consent clause,” he told EdSurge.
He points to a section of the OpenAI FAQ that notes: “We also understand that some students may have used these tools for assignments without disclosing their use of AI. In addition to potentially violating the school’s honor codes, such instances may go against our terms of use.”
Newton argues that the document ends up offering little concrete guidance to educators who teach non-minors (most college students, for example) on how to combat the use of ChatGPT for cheating. This is especially true as the document goes on to note that tools designed to detect whether an assignment was written by a bot have proven ineffective or, worse, prone to falsely accusing students who wrote their own assignments. As the company’s own FAQ puts it: “Even if these tools could accurately identify AI-generated content (which they still can’t), students can make small modifications to evade detection.”
EdSurge contacted OpenAI for comment. Niko Felix, a spokesperson for the company, said in an email that “our audience is broader than just edtech, so we consider requiring parental consent for children ages 13 to 17 a best practice.”
Felix pointed to resources the company has created to help educators use the tool effectively, including a guide with sample prompts. He said officials were not available for an interview at press time.
ChatGPT does not verify whether users between 13 and 17 years old have actually obtained parental permission, Felix confirmed.
Not everyone thinks that requiring parental consent for minors to use artificial intelligence tools is a bad idea.
“I actually think it’s good advice until we better understand how this AI will actually impact our kids,” says James Diamond, assistant professor of education and faculty director of the Educational Technology and Learning in the Digital Age program at Johns Hopkins University. “I’m a fan of younger students using the tool with someone who can guide them, whether it’s a teacher or someone at home.”
Since the rise of ChatGPT, many other tech giants have launched similar AI chatbots of their own. And some of those tools don’t allow minors to use them at all.
Google’s Bard, for example, is off-limits to minors entirely. “To use Bard, you must be 18 or older,” its frequently asked questions state, adding that “You cannot access Bard with a Google account managed by Family Link or with a Google Workspace for Education account designated as a child under 18.”
However, despite these rules, teenagers appear to be using AI tools anyway.
A recent survey by the financial research firm Piper Sandler found that 40 percent of teens reported using ChatGPT in the past six months, and many are likely doing so without asking any adults for permission.