When wildfires swept across Maui last month with destructive fury, China’s increasingly resourceful information warriors pounced.
The disaster was not natural, they said in a flurry of false posts that spread across the internet, but was the result of a secret “weather weapon” being tested by the United States. To make the claim more plausible, the posts carried photographs that appeared to have been generated by artificial intelligence programs, making them among the first to use these new tools to lend an aura of authenticity to a disinformation campaign.
For China — which largely stood on the sidelines of the 2016 and 2020 U.S. presidential elections while Russia ran hacking operations and disinformation campaigns — the effort to cast the wildfires as a deliberate act by American intelligence agencies and the military was a rapid change of tactics.
Until now, China’s influence campaigns have focused on amplifying propaganda defending its policies on Taiwan and other subjects. The most recent effort, revealed by researchers from Microsoft and a range of other organizations, suggests that Beijing is making more direct attempts to sow discord in the United States.
The move also comes as the Biden administration and Congress are grappling with how to push back on China without tipping the two countries into open conflict, and with how to reduce the risk that A.I. is used to magnify disinformation.
The impact of the Chinese campaign — identified by researchers from Microsoft, Recorded Future, the RAND Corporation, NewsGuard and the University of Maryland — is difficult to measure, though early indications suggest that few social media users engaged with the most outlandish of the conspiracy theories.
Brad Smith, the vice chairman and president of Microsoft, whose researchers analyzed the covert campaign, sharply criticized China for exploiting a natural disaster for political gain.
“I just don’t think that’s worthy of any country, much less any country that aspires to be a great country,” Mr. Smith said in an interview on Monday.
China was not the only country to make political use of the Maui fires. Russia did as well, spreading posts that emphasized how much money the United States was spending on the war in Ukraine and that suggested the cash would be better spent at home for disaster relief.
The researchers suggested that China was building a network of accounts that could be put to use in future information operations, including the next U.S. presidential election. That is the pattern that Russia set in the year or so leading up to the 2016 election.
“This is going into a new direction, which is sort of amplifying conspiracy theories that are not directly related to some of their interests, like Taiwan,” said Brian Liston, a researcher at Recorded Future, a cybersecurity company based in Massachusetts.
If China does engage in influence operations for the election next year, U.S. intelligence officials have assessed in recent months, it is likely to try to diminish President Biden and raise the profile of former President Donald J. Trump. While that may seem counterintuitive to Americans who remember Mr. Trump’s effort to blame Beijing for what he called the “China virus,” the intelligence officials have concluded that Chinese leaders prefer Mr. Trump. He has called for pulling Americans out of Japan, South Korea and other parts of Asia, while Mr. Biden has cut off China’s access to the most advanced chips and the equipment made to produce them.
China’s promotion of a conspiracy theory about the fires comes after Mr. Biden vented in Bali last fall to Xi Jinping, China’s president, about Beijing’s role in the spread of such disinformation. According to administration officials, Mr. Biden angrily criticized Mr. Xi for the spread of false accusations that the United States operated biological weapons laboratories in Ukraine.
There is no indication that Russia and China are working together on information operations, according to the researchers and administration officials, but they often echo each other’s messages, particularly when it comes to criticizing U.S. policies. Their combined efforts suggest a new phase of the disinformation wars is about to begin, one bolstered by the use of A.I. tools.
“We don’t have direct evidence of coordination between China and Russia in these campaigns, but we’re certainly finding alignment and a sort of synchronization,” said William Marcellino, a researcher at RAND and an author of a new report warning that artificial intelligence will enable a “critical jump forward” in global influence operations.
The wildfires in Hawaii — like many natural disasters these days — spawned numerous rumors, false reports and conspiracy theories almost from the start.
Caroline Amy Orr Bueno, a researcher at the University of Maryland’s Applied Research Lab for Intelligence and Security, reported that a coordinated Russian campaign began on Twitter, the social media platform now known as X, on Aug. 9, a day after the fires started.
It spread the phrase, “Hawaii, not Ukraine,” from one obscure account with few followers through a series of conservative or right-wing accounts like Breitbart and ultimately Russian state media, reaching thousands of users with a message intended to undercut U.S. military assistance to Ukraine.
China’s state media apparatus often echoes Russian themes, especially animosity toward the United States. But in this case, it also pursued a distinct disinformation campaign.
Recorded Future first reported that the Chinese government mounted a covert campaign to blame a “weather weapon” for the fires, identifying numerous posts in mid-August falsely claiming that MI6, the British foreign intelligence service, had revealed “the amazing truth behind the wildfire.” Posts with identical language appeared on social media sites across the internet, including Pinterest, Tumblr, Medium and Pixiv, a Japanese site used by artists.
Other inauthentic accounts spread similar content, often accompanied by mislabeled videos, including one from a popular TikTok account, The Paranormal Chic, that showed a transformer explosion in Chile. According to Recorded Future, the Chinese content often echoed — and amplified — posts by conspiracy theorists and extremists in the United States, including white supremacists.
The Chinese campaign operated across many of the major social media platforms — and in many languages, suggesting it was aimed at reaching a global audience. Microsoft’s Threat Analysis Center identified inauthentic posts in 31 languages, including French, German and Italian, but also in less prominent ones like Igbo, Odia and Guarani.
The artificially generated images of the Hawaii wildfires identified by Microsoft’s researchers appeared on multiple platforms, including a Reddit post in Dutch. “These specific A.I.-generated images appear to be exclusively used” by Chinese accounts used in this campaign, Microsoft said in a report. “They do not appear to be present elsewhere online.”
Clint Watts, the general manager of Microsoft’s Threat Analysis Center, said that China appeared to have adopted Russia’s playbook for influence operations, laying the groundwork to influence politics in the United States and other countries.
“This would be Russia in 2015,” he said, referring to the bots and inauthentic accounts Russia created before its extensive online influence operation during the 2016 election. “If we look at how other actors have done this, they are building capacity. Now they’re building accounts that are covert.”
Natural disasters have often been the focus of disinformation campaigns, allowing bad actors to exploit emotions to accuse governments of shortcomings, either in preparation or in response. The goal can be to undermine trust in specific policies, like U.S. support for Ukraine, or more generally to sow internal discord. By suggesting the United States was testing or using secret weapons against its own citizens, China’s effort also seemed intended to depict the country as a reckless, militaristic power.
“We’ve always been able to come together in the wake of humanitarian disasters and provide relief in the wake of earthquakes or hurricanes or fires,” said Mr. Smith, who is presenting some of Microsoft’s findings to Congress on Tuesday. “And to see this kind of pursuit instead is both, I think, deeply disturbing and something that the global community should draw a red line around and put off-limits.”