Based on collaboration and information sharing with Microsoft, we disrupted five state-affiliated malicious actors: two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard. We terminated the OpenAI accounts identified as associated with these actors.
These actors generally sought to use OpenAI services to query open source information, translate, find coding errors, and perform basic coding tasks.
Specifically:
- Charcoal Typhoon used our services to investigate various cybersecurity companies and tools, debug code and generate scripts, and create content likely to be used in phishing campaigns.
- Salmon Typhoon used our services to translate technical documents, retrieve publicly available information on multiple intelligence agencies and regional threat actors, assist with encryption, and investigate common ways processes could be hidden in a system.
- Crimson Sandstorm used our services to support scripting related to web and application development, generate content likely to be used in phishing campaigns, and investigate common ways malware could evade detection.
- Emerald Sleet used our services to identify experts and organizations focused on defense issues in the Asia-Pacific region, understand publicly available vulnerabilities, help with basic scripting tasks, and write content that could be used in phishing campaigns.
- Forest Blizzard used our services primarily for open source research on satellite communication protocols and radar imaging technology, as well as support with scripting tasks.
Additional technical details on the nature of these threat actors and their activities can be found in the Microsoft blog post published today.
The activities of these actors are consistent with previous red team assessments we conducted in partnership with external cybersecurity experts, which found that GPT-4 offers only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI-powered tools.