Hackers working for nation-states have used OpenAI systems in creating their cyberattacks, according to research published Wednesday by OpenAI and Microsoft.
The companies believe their research, published on their websites, documents for the first time how hackers linked to foreign governments are using generative artificial intelligence in their attacks.
But instead of using A.I. to generate exotic attacks, as some in the tech industry feared, hackers have used it in mundane ways, such as composing emails, translating documents and debugging computer code, the companies said.
“They are just using it like everyone else does, to try to be more productive in what they're doing,” said Tom Burt, who oversees Microsoft's efforts to track and disrupt major cyberattacks.
Microsoft has committed $13 billion to OpenAI, and the tech giant and the startup are close partners. They shared threat intelligence to document how five hacking groups with ties to China, Russia, North Korea and Iran used OpenAI technology. The companies did not say which OpenAI technology was used. The startup said it had shut off the groups' access after learning of the activity.
Since OpenAI launched ChatGPT in November 2022, technology experts, the press and government officials have worried that adversaries could weaponize these powerful tools, finding new and creative ways to exploit vulnerabilities. As with much else involving A.I., the reality may be more mundane than the fear.
“Are you providing something new and novel that is speeding up an adversary beyond what a better search engine could? I haven't seen any evidence of that,” said Bob Rotsted, who leads cybersecurity threat intelligence for OpenAI.
He said OpenAI limited where customers could sign up for accounts, but sophisticated culprits could evade detection using various techniques, such as masking their location.
“They sign up like anyone else,” Rotsted said.
Microsoft said a hacking group connected to the Islamic Revolutionary Guard Corps in Iran had used artificial intelligence systems to research ways to bypass antivirus scanners and generate phishing emails. The emails included “one purporting to come from an international development agency and another attempting to attract prominent feminists to a feminism website created by attackers,” the company said.
In another case, a Russian-affiliated group trying to influence the war in Ukraine used OpenAI's systems to conduct research on satellite communication protocols and radar imaging technology, OpenAI said.
Microsoft tracks more than 300 hacking groups, including cybercriminals and nation-states, and OpenAI's proprietary systems made it easier to track and disrupt their use, executives said. They said that while there were ways to identify whether hackers were using open source A.I. technology, the proliferation of open systems made the task harder.
“When work is open source, you can't always know who is implementing that technology, how they are implementing it, and what their policies are for responsible and safe use of the technology,” Burt said.
Microsoft did not discover any use of generative A.I. in the Russian attack on senior Microsoft executives that the company disclosed last month, it said.
Cade Metz contributed reporting from San Francisco.