Valerio Capraro, a psychology professor at the University of Milan, has recently noticed a rise in research statements from PhD applicants that appear to be at least partially generated by AI.
“I would say that almost all applications show traces of AI-generated content, and about half appear to be completely AI-generated,” says Capraro, who studies social behavior and AI. “This trend is very worrying.”
Recently, Capraro posted on social media about one such submission he received. After the post unexpectedly went viral, Capraro was shocked to learn that many educators were angry with him for assuming the submission was AI-generated, and some even accused him of discrimination; one angry educator contacted his department to complain.
“They said I was discriminating against people who didn’t speak English as their first language,” he says, adding that the accusation makes no sense in context. “In fact, none of our candidates speak English as their first language.”
In the end, Capraro rejected that particular application, not because he suspected it was AI-generated, but because it was unconvincing, as many AI-generated texts are these days.
Ultimately, the incident highlights both the rise of AI-generated college applications, from undergraduate, graduate, and PhD applicants alike, and the challenges faced by those who read them. Existing AI detection tools are flawed, and there is as yet no consensus on how best to respond to AI-generated content.
Using AI for college admissions
As AI becomes more accessible, some educators are even questioning whether students’ use of AI in application materials should be treated as a problem at all.
Jeffrey Hancock, a communications professor at Stanford University, recently told CalMatters that students could produce stronger applications using AI-generated essays. Hancock suggests doing this by custom-training a tool like ChatGPT on a mix of good and bad college essays; the AI can then be told to mimic the good essays and avoid the patterns in the bad ones. This strategy may be particularly appealing because many colleges have been slow to adopt specific policies on AI use in application materials. Even so, if AI use is discovered, an application to most institutions would likely be harmed.
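In practice, the strategy Hancock describes amounts to few-shot prompting: showing the model contrasting examples before asking it to revise a draft. The sketch below is a hypothetical illustration of that idea, not code from any admissions tool; the example essay openings and the `build_prompt` helper are invented for demonstration.

```python
# A minimal sketch of few-shot prompt construction: contrast strong and
# weak essay openings, then ask the model to revise a draft accordingly.
# All example text here is hypothetical.

GOOD_ESSAYS = [
    "When my grandmother lost her hearing, I taught myself sign language to keep our Sunday talks alive.",
    "Our robotics club's failed competition entry taught me more about engineering than any victory could.",
]
BAD_ESSAYS = [
    "I have always been passionate about learning and helping others.",
    "Winning the award was the best day of my life and proved my dedication.",
]

def build_prompt(draft: str) -> str:
    """Assemble a few-shot prompt contrasting strong and weak essays."""
    parts = ["You are revising a college application essay."]
    parts.append("Emulate the concrete, specific style of these strong openings:")
    parts += [f"- {essay}" for essay in GOOD_ESSAYS]
    parts.append("Avoid the vague, generic style of these weak openings:")
    parts += [f"- {essay}" for essay in BAD_ESSAYS]
    parts.append("Revise this draft accordingly:")
    parts.append(draft)
    return "\n".join(parts)

prompt = build_prompt("I am a hard worker who loves science.")
print(prompt)
```

The assembled string would then be sent to a chat model; the point is only that the good/bad examples travel inside the prompt itself, which is why no special training step is required.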
The Common App, which is used by more than a million students a year, has a policy against AI use, Jackson Sternberg, a public relations specialist for the company, said via email. Students using the system must agree to the app’s terms of service, which prohibit submitting fraudulent information, and then sign a statement affirming that what they submit is their own work. The app’s fraud policy explicitly prohibits submitting “substantive” AI-generated content.
“We investigate all allegations of fraud and, if substantiated, take appropriate disciplinary action,” Sternberg said.
Still, much of the selection process is left to the higher education institutions themselves. “Each member university processes and reviews applicant data according to its own policies and procedures,” Sternberg added.
What educators and students can do
Capraro believes the best way forward is to treat AI-generated materials the same way you would any other material. “It’s the content that counts,” he says. “AI-generated texts are often mediocre. They might look good for a high school essay, but if you’re an evaluator for an advanced position, such as a PhD fellowship, AI-generated texts often get very low marks, not because they’re AI-generated, but because they’re superficial and often incorrect. As an evaluator, I focus on content rather than form.”
Capraro says students completing applications should be encouraged to use AI to assist, not replace, their writing. As a non-native English speaker, he has used AI to greatly improve his own English writing over the past year. For this reason, he doesn’t think applicants should necessarily be banned from using AI. “They should be discouraged from relying too heavily on it as a shortcut to avoid work,” he says. “Applicants should understand that there are no real shortcuts in professional settings. Especially for competitive positions, personal input is critical.”
Capraro adds that AI will ultimately not be good enough for the most ambitious candidates, and that this could be the most effective message for students. “I think people who rely too much on AI in the next few years will just regress to the average and get the average jobs,” he says.