Artificial intelligence companies have been at the forefront of developing this transformative technology. Now they are also racing to set limits on how AI is used in a year packed with major elections around the world.
Last month, OpenAI, the creator of the ChatGPT chatbot, said it was working to prevent abuse of its tools in elections, in part by banning their use to create chatbots that pretend to be real people or institutions. In recent weeks, Google also said it would restrict its AI chatbot, Bard, from responding to certain election-related prompts to avoid inaccuracies. And Meta, the owner of Facebook and Instagram, promised to better label AI-generated content on its platforms so that voters could more easily discern what information was real and what was false.
On Friday, Anthropic, another major AI startup, joined its peers in banning its technology from being applied to political campaigns or lobbying. In a blog post, the company, which makes a chatbot called Claude, said it would warn or suspend any user who violates its rules. It added that it was using tools trained to automatically detect and block misinformation and influence operations.
“The history of AI deployment has also been full of surprises and unexpected effects,” the company said. “We expect that in 2024 there will be surprising uses of artificial intelligence systems, uses that were not foreseen by their own developers.”
The efforts are part of a push by artificial intelligence companies to control a technology they popularized as billions of people head to the polls. At least 83 elections are expected around the world this year, the largest concentration for at least the next 24 years, according to the consulting firm Anchor Change. In recent weeks, people in Taiwan, Pakistan and Indonesia have voted, and India, the world's largest democracy, is scheduled to hold its general election in the spring.
It's unclear how effective the restrictions on AI tools will be, especially as tech companies press ahead with increasingly sophisticated technology. On Thursday, OpenAI unveiled Sora, a technology that can instantly generate realistic videos. Tools like these could be used to produce text, sounds and images in political campaigns, blurring fact and fiction and raising questions about whether voters can distinguish which content is real.
AI-generated content has already appeared in American political campaigns, prompting regulatory and legal pushback. Some state lawmakers are drafting bills to regulate AI-generated political content.
Last month, New Hampshire residents received robocall messages discouraging them from voting in the state primary in a voice that was likely artificially generated to sound like President Biden. The Federal Communications Commission banned these types of calls last week.
“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, impersonate celebrities, and misinform voters,” FCC Chairwoman Jessica Rosenworcel said at the time.
AI tools have also created misleading or deceptive depictions of politicians and political issues in Argentina, Australia, Britain and Canada. Last week, former Prime Minister Imran Khan, whose party won the most seats in Pakistan's elections, used an AI voice to declare victory while in prison.
In one of the most consequential election cycles in living memory, the misinformation and hoaxes that AI can create could be devastating to democracy, experts said.
“We're behind the eight ball here,” said Oren Etzioni, a University of Washington professor who specializes in artificial intelligence and the founder of True Media, a nonprofit that works to identify online misinformation in political campaigns. “We need tools to respond to this in real time.”
Anthropic said in its Friday announcement that it was planning tests to identify how its Claude chatbot could produce biased or misleading content related to political candidates, policy issues and election administration. These “red team” tests, which are often used to break through a technology's safeguards and better identify its vulnerabilities, will also explore how the AI responds to harmful queries, such as prompts requesting voter-suppression tactics.
In the coming weeks, Anthropic will also launch a trial that aims to redirect U.S. users with voting-related queries to authoritative sources of information, such as TurboVote from Democracy Works, a nonpartisan nonprofit group. The company said its AI model was not trained frequently enough to reliably provide real-time facts about specific elections.
Similarly, OpenAI said last month that it planned to point people to voting information through ChatGPT, as well as label AI-generated images.
“Like any new technology, these tools come with benefits and challenges,” OpenAI said in a blog post. “They are also unprecedented and we will continue to evolve our approach as we learn more about how our tools are used.”
(The New York Times sued OpenAI and its partner, Microsoft, in December, alleging copyright infringement of news content related to artificial intelligence systems.)
Synthesia, a startup with an AI video generator that has been linked to disinformation campaigns, also bans the use of its technology for “news-like content,” including false, polarizing, divisive or misleading material. The company has improved the systems it uses to detect misuse of its technology, said Alexandru Voica, Synthesia's head of policy and corporate affairs.
Stability AI, a startup with an image-generating tool, said it prohibited the use of its technology for illegal or unethical purposes, worked to block the generation of unsafe images, and applied an imperceptible watermark to all images.
The biggest technology companies have also weighed in. Last week, Meta said it was collaborating with other companies on technology standards to help recognize when content was generated with artificial intelligence. Ahead of the European Union's parliamentary elections in June, TikTok said in a blog post on Wednesday that it would ban potentially misleading manipulated content and require users to label realistic AI creations.
Google said in December that it would also require video creators on YouTube and all election advertisers to disclose digitally generated or altered content. The company said it was preparing for the 2024 elections by restricting its AI tools, such as Bard, from returning answers to certain election-related queries.
“Like any emerging technology, AI presents new opportunities and challenges,” Google said. AI can help combat abuse, the company added, “but we are also preparing for how it can change the misinformation landscape.”