A new flood of child sexual abuse material created by artificial intelligence threatens to overwhelm authorities already held back by outdated technology and laws, according to a new report released Monday by Stanford University's Internet Observatory.
Over the past year, new artificial intelligence technologies have made it easier for criminals to create explicit images of children. Now, Stanford researchers warn that the National Center for Missing and Exploited Children, a nonprofit that serves as the central coordinating agency and receives most of its funding from the federal government, does not have the resources to fight the growing threat.
The organization's CyberTipline, created in 1998, is the federal clearinghouse for all reports of online child sexual abuse material, or CSAM, and is used by law enforcement to investigate crimes. But many of the tips it receives are incomplete or riddled with inaccuracies. Its small staff has also struggled to keep up with the volume.
“It is almost certain that in the coming years, the CyberTipline will be inundated with very realistic-looking A.I. content, making it even more difficult for authorities to identify real children who need to be rescued,” said Shelby Grossman, one of the authors of the report.
The National Center for Missing and Exploited Children is on the front lines of a new battle against A.I.-generated sexual exploitation images, an emerging area of crime that lawmakers and law enforcement are still working to define. Amid an epidemic of A.I.-generated fake nudes circulating in schools, some lawmakers are already taking steps to ensure such content is treated as illegal.
A.I.-generated CSAM images are illegal if they depict real children or if images of real children were used as training data, researchers say. But purely synthetic content that does not involve real images could be protected as free speech, according to one of the report's authors.
Public outrage over the proliferation of child sexual abuse images online erupted at a recent hearing with the chief executives of Meta, Snap, TikTok, Discord and X, who were criticized by lawmakers for not doing enough to protect young children online.
The Center for Missing and Exploited Children, which receives tips from individuals and from companies such as Facebook and Google, has advocated for legislation that would increase its funding and give it access to more technology. Stanford researchers said the organization gave them access to interviews with employees and to its systems so the report could highlight the vulnerabilities in systems that need to be updated.
“Over the years, the complexity of reporting and the severity of crimes against children continue to evolve,” the organization said in a statement. “Therefore, leveraging emerging technology solutions throughout the CyberTipline process leads to more children being protected and offenders being held accountable.”
The Stanford researchers found that the organization needs to change the way its tip line works to ensure that law enforcement can determine which reports involve A.I.-generated content, and to ensure that companies reporting potentially abusive material on their platforms fill out the forms in their entirety.
Less than half of all reports made to the CyberTipline were “actionable” in 2022, either because the companies reporting the abuse did not provide enough information or because the image in a report had spread rapidly online and been reported too many times. The tip line has an option to flag whether the content in a tip is a potential meme, but many companies don't use it.
In a single day earlier this year, a record one million reports of child sexual abuse material flooded the federal clearinghouse. For weeks, investigators worked to respond to the unusual surge. It turned out that many of the reports were related to a meme image that people were sharing across platforms to express outrage, not malicious intent. But it still consumed significant investigative resources.
That trend will worsen as A.I.-generated content accelerates, said Alex Stamos, one of the authors of the Stanford report.
“A million identical images is hard enough; a million separate images created by A.I. would break them,” Stamos said.
The Center for Missing and Exploited Children and its contractors are restricted from using cloud computing providers and must store images locally on computers. The researchers found that this requirement makes it difficult to build and use the specialized hardware needed to create and train A.I. models for its work.
The organization also typically lacks the technology to make broad use of facial recognition software to identify victims and offenders. Much of the processing of reports is still manual.