Meta spent years figuring out how to handle political advertising on Facebook and Instagram. It set up systems and developed policies for what types of political ads were and were not allowed on its platforms.
But that was before the rise of consumer artificial intelligence.
On Wednesday, Meta introduced a new policy to address the effects of A.I. on political advertising. The Silicon Valley company said that starting next year it would require political advertisers around the world to disclose when they had used third-party artificial intelligence software in ads about political or social issues to synthetically depict people and events.
Meta added that it would prohibit advertisers from using the company’s own A.I.-assisted software to create ads on political or social issues, as well as ads related to housing, employment, credit, health, pharmaceuticals or financial services. Those advertisers could still use third-party A.I. tools, such as the image generators DALL-E and Midjourney, but only with disclosures.
“We believe this approach will allow us to better understand potential risks and create appropriate safeguards for the use of generative A.I. in ads related to potentially sensitive topics in regulated industries,” the company said.
Meta is facing a wave of artificial intelligence tools that the public has adopted over the past year. As consumers have flocked to ChatGPT, Google Bard, Midjourney and other generative A.I. products, big tech companies like Meta have had to rethink how to handle a new era of manipulated or entirely fabricated images, videos and audio.
Political advertising has long been a contentious issue for Meta. In 2016, Facebook was criticized for its lack of oversight after Russians used the social network’s ads to sow discontent among Americans. Since then, Mark Zuckerberg, founder and CEO of Meta, has spent billions of dollars working to crack down on misinformation and disinformation on the company’s platforms and has hired independent contractors to closely monitor the political ads that flow through its systems.
The company has also allowed politicians to lie in ads on its platforms, a stance Zuckerberg has defended on the grounds of free speech and public discourse. Meta has likewise been reluctant to limit the speech of elected officials. Nick Clegg, Meta’s president of global affairs, has called for regulatory guidance on these issues rather than leaving technology companies to set the rules.
Those who run political ads on Meta are currently required to complete an authorization process and include a “paid for by” disclaimer on the ads, which are stored in the company’s public ad library for seven years so that journalists and academics can study them.
When Meta’s new A.I. policy goes into effect next year, political campaigns and marketers will be asked to disclose whether they used A.I. tools to alter their ads. If they have, and the ad is approved, the company will run it with a notice that it was created with artificial intelligence tools. Meta said it would not require advertisers to disclose modifications that were “inconsequential or irrelevant to the claim, assertion or issue raised,” such as photo retouching and image cropping.
Ads on political and social issues that appear to have used artificial intelligence to alter images, videos or audio without disclosing it will be rejected, the company said. Organizations that repeatedly try to run such ads without disclosure will be penalized, it added, without specifying what the sanctions might be. The company has long relied on third-party fact-checking partners to review, rate and potentially remove ads designed to spread misinformation.
By preventing advertisers from using the company’s own A.I.-assisted software to create ads about political or social issues, Meta can avoid headaches or litigation related to its ad technology.
In 2022, the Department of Justice sued the company, claiming that its ad-targeting system allowed advertisers to discriminate against Facebook users based on their race, gender, religion and other characteristics. The company eventually settled the lawsuit, agreeing to modify its advertising technology and pay a penalty of $115,054.