A coalition of 20 technology companies signed an agreement on Friday to help prevent deceptive AI fakes from disrupting the critical 2024 elections taking place in more than 40 countries. OpenAI, Google, Meta, Amazon, Adobe and X are among the companies joining the pact to prevent and combat AI-generated content that could influence voters. However, the agreement's vague language and lack of binding enforcement call into question whether it goes far enough.
The list of companies signing the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” includes those that create and distribute AI models, as well as the social platforms where deepfakes are most likely to appear. The signatories are Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic and X (formerly Twitter).
The group describes the agreement as “a set of commitments to implement technology that counters harmful AI-generated content intended to mislead voters.” The signatories have agreed to the following eight commitments:
- Develop and implement technology to mitigate risks related to misleading AI election content, including open-source tools where appropriate.
- Evaluate models within the scope of this agreement to understand the risks they may present with respect to misleading AI election content.
- Seek to detect the distribution of this content on their platforms.
- Seek to appropriately address this content when it is detected on their platforms.
- Build cross-sector resilience to misleading AI election content.
- Provide transparency to the public about how the company addresses it.
- Continue to collaborate with a diverse set of global civil society organizations and academics.
- Support efforts to foster public awareness, media literacy and society-wide resilience.
The agreement will apply to AI-generated audio, video and images. It addresses content that “falsifies or deceptively alters the appearance, voice or actions of political candidates, election officials and other key stakeholders in a democratic election, or that provides false information to voters about when, where and how they can vote.”
The signatories say they will work together to create and share tools to detect and address the online distribution of deepfakes. In addition, they plan to promote educational campaigns and “provide transparency” to users.
OpenAI, one of the signatories, said last month that it plans to suppress election-related misinformation worldwide. Images generated with the company's DALL-E 3 tool will carry a digital watermark clarifying their origin as AI-generated. The ChatGPT maker said it would also work with journalists, researchers and platforms to get feedback on its provenance classifier, a tool for detecting AI-generated images. The company also plans to prevent its chatbots from impersonating candidates.
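For platforms trying to honor the accord's detection commitment, checking an image for embedded provenance metadata is one concrete starting point. Below is a minimal sketch of what such a check might look like, assuming the open-source `c2patool` CLI from the Content Authenticity Initiative is installed; the file name is illustrative, and OpenAI hasn't published the exact format of its watermarks, so treat this as a hedged illustration rather than any signatory's actual pipeline.

```python
import json
import subprocess

def read_provenance(path: str):
    """Attempt to read a C2PA provenance manifest from an image file.

    Returns the parsed manifest store as a dict, or None when the file
    carries no provenance metadata (c2patool exits with an error).
    """
    result = subprocess.run(
        ["c2patool", path],  # by default, c2patool prints the manifest store as JSON
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None
    return json.loads(result.stdout)

# Hypothetical usage: flag files whose provenance marks them as AI-generated.
manifest = read_provenance("suspect_image.png")
if manifest is None:
    print("No provenance metadata found; other detection signals needed.")
else:
    print("C2PA manifest present:", json.dumps(manifest)[:200], "...")
```

The absence of metadata proves nothing on its own, since watermarks can be stripped by screenshots or re-encoding, which is presumably why the accord pairs watermarking with classifier-based detection.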
“We are committed to protecting the integrity of elections by implementing policies that prevent abuse and improve transparency around AI-generated content,” Anna Makanju, vice president of global affairs at OpenAI, wrote in the group's joint press release. “We look forward to working with industry partners, civil society leaders and governments around the world to help protect elections from the misleading use of AI.”
Notably absent from the list is Midjourney, whose eponymous AI image generator currently produces some of the most convincing fake photos. However, the company said earlier this month that it would consider banning political generations entirely during election season. Last year, Midjourney was used to create a viral fake image of Pope Francis walking down the street in a puffy white jacket. One of Midjourney's closest competitors, Stability AI (maker of the open-source Stable Diffusion), did sign. Engadget has reached out to Midjourney for comment on its absence, and we'll update this article if we hear back.
Apple is the only one of Silicon Valley's “big five” missing from the list. That may be because the iPhone maker has yet to launch any generative AI products and doesn't host a social media platform where deepfakes could be distributed. Regardless, we reached out to Apple PR for clarification but hadn't heard back at the time of publication.
While the general principles the 20 companies agreed to sound like a promising start, it remains to be seen whether a loose set of commitments without binding enforcement will be enough to combat a nightmare scenario in which the world's bad actors use generative AI to sway public opinion and elect aggressively undemocratic candidates, in the United States and elsewhere.
“The language is not as strong as you might expect,” Rachel Orey, senior associate director of the Bipartisan Policy Center's Elections Project, told The Associated Press on Friday. “I think we should give credit where credit is due and recognize that companies have a vested interest in ensuring their tools are not used to undermine free and fair elections. That said, it is voluntary, and we'll be watching to see whether they follow through.”
AI-generated deepfakes have already been used in the US presidential race. As far back as April 2023, the Republican National Committee (RNC) ran an ad using AI-generated images of President Joe Biden and Vice President Kamala Harris. The campaign of Ron DeSantis, who has since dropped out of the Republican primary, followed with AI-generated images of rival and likely nominee Donald Trump in June 2023. Both included easy-to-overlook disclaimers noting the images were AI-generated.
In January, two Texas-based companies used an AI-generated clone of President Biden's voice to robocall New Hampshire voters, urging them not to vote in the state's primary on January 23. The clip, generated with ElevenLabs' voice-cloning tool, reached as many as 25,000 New Hampshire voters, according to the state's attorney general. ElevenLabs is among the pact's signatories.
The Federal Communications Commission (FCC) acted quickly to prevent further abuse of voice-cloning technology in fake campaign calls: earlier this month, it voted unanimously to ban AI-generated robocalls. The (seemingly eternally deadlocked) US Congress has not passed any AI legislation. In December, the European Union (EU) reached agreement on its sweeping AI Act, safety-focused legislation that could influence other nations' regulatory efforts.
“As society embraces the benefits of AI, we have a responsibility to help ensure these tools are not weaponized in elections,” Microsoft Vice Chair and President Brad Smith wrote in a press release. “AI didn't create election deception, but we must ensure it doesn't help deception flourish.”