The Federal Communications Commission on Thursday prohibited unwanted robocalls generated by artificial intelligence, amid growing concerns about election misinformation and consumer fraud facilitated by the technology.
The FCC's unanimous decision cited a three-decade-old law aimed at curbing spam phone calls and clarified that AI-generated spam calls are also illegal. In doing so, the agency said, it expanded states' ability to prosecute the creators of unsolicited robocalls.
“It seems like something from the distant future, but it's here,” FCC Chairwoman Jessica Rosenworcel said in a statement. “Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, impersonate celebrities, and misinform voters.”
Concerns about using AI to replicate the voices and images of politicians and celebrities have grown in recent months as the technology to recreate a person's likeness has taken off, particularly ahead of the US presidential election in November.
Those concerns came to a head late last month, when thousands of New Hampshire voters received an unsolicited robocall featuring a fake voice of President Biden urging them to abstain from voting in the state's primary, the first of the election season. The New Hampshire attorney general's office announced this week that it had opened a criminal investigation into a Texas-based company it believes is behind the robocall. The caller ID was spoofed to make it appear that the calls came from the former chairwoman of the New Hampshire Democratic Party.
AI has also been used to create entirely fake videos and advertisements that imitate the voices and images of celebrities and politicians. Those include fake, unauthorized videos of actor Tom Hanks promoting dental plans and sexually explicit content depicting singer Taylor Swift.
Lawmakers have called for legislation to ban AI deepfakes in political ads, but no bill has gained traction in Congress. In the absence of federal legislation, more than a dozen states have passed laws limiting the use of AI in political ads.