ChatGPT is being weaponized—again. But this time, it’s not lone hackers or shady developers. It’s nation-state cyber groups from Russia, China, and Iran, using AI to fuel their digital espionage campaigns.
Key Points at a Glance
- OpenAI has banned ChatGPT accounts linked to Russian and Chinese threat actors
- AI was used to assist in malware development, debugging, and infrastructure setup
- ScopeCreep malware campaign used ChatGPT to refine code and evade detection
- Chinese groups used AI for system configuration, firewall setup, and app development
OpenAI has revealed that its ChatGPT platform was quietly used by Russian-speaking and Chinese-linked hacking groups to develop malware, fine-tune scripts, and probe sensitive U.S. technologies. The company has since banned the accounts—but the incident reveals just how fast generative AI is being pulled into global cyber conflict.
In its latest threat intelligence report, OpenAI detailed a Go-based malware campaign dubbed ScopeCreep, in which ChatGPT was used to develop and refine Windows malware. The attackers used a clever trick: creating a disposable email account, asking ChatGPT a single question to make one incremental improvement to their code, then abandoning the account and repeating the process. The technique maximized operational security and left minimal traces.
That malware, later embedded in a fake video game tool called Crosshair X, infected user systems and began a stealthy process of privilege escalation, data exfiltration, and remote command and control. Among its evasive techniques: launching with ShellExecuteW, using PowerShell to disable Windows Defender, obfuscating code with Base64, and routing communications via SOCKS5 proxies.
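For readers unfamiliar with those last two building blocks: Base64 and SOCKS5 are ordinary, well-documented primitives that malware merely repurposes. The minimal Go sketch below is illustrative only, not code from the ScopeCreep campaign; it shows how a string is Base64-encoded and decoded with the standard library, and how a SOCKS5 dialer is built with the golang.org/x/net/proxy package (the proxy address here is a placeholder).

```go
package main

import (
	"encoding/base64"
	"fmt"
	"log"

	"golang.org/x/net/proxy"
)

func main() {
	// Base64 is a reversible encoding, not encryption: it only hides
	// strings from casual inspection, and analysts can decode it trivially.
	encoded := base64.StdEncoding.EncodeToString([]byte("example command string"))
	fmt.Println("encoded:", encoded)

	decoded, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("decoded:", string(decoded))

	// A SOCKS5 dialer (pointing at a placeholder local proxy) routes
	// outbound TCP through an intermediary, masking the true destination
	// of the traffic from network monitoring on the host.
	dialer, err := proxy.SOCKS5("tcp", "127.0.0.1:1080", nil, proxy.Direct)
	if err != nil {
		log.Fatal(err)
	}

	conn, err := dialer.Dial("tcp", "example.com:80")
	if err != nil {
		log.Fatal(err) // fails unless a SOCKS5 proxy is actually listening
	}
	defer conn.Close()
	fmt.Println("connected via SOCKS5 proxy")
}
```

Neither primitive is malicious on its own; it was their combination with Defender tampering and credential theft that made ScopeCreep dangerous.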
It didn’t stop there. Victims’ credentials, tokens, and cookies were harvested and sent to a Telegram channel operated by the attackers. ChatGPT, it seems, had unknowingly helped build the toolset of a digital thief.
On the other side of the globe, Chinese nation-state actors—including APT5 and APT15—were caught using ChatGPT in more strategic, infrastructure-focused ways. Some accounts were used for open-source research and script editing. Others sought AI help in Linux system administration, software packaging, firewall configurations, and Android app development. The goal: to quietly build and maintain digital environments for future attacks.
One particularly concerning use case was a request to build a brute-force FTP login script. Another involved automating social media manipulation: writing code to programmatically post and like content across TikTok, Instagram, Facebook, and X. OpenAI flagged these patterns as part of broader influence operations and surveillance efforts.
These are not isolated incidents. OpenAI’s team identified additional accounts linked to cybercrime enterprises posing as “employment platforms” that charged onboarding fees while using ChatGPT to power scam tasks.
What’s clear is that generative AI can dramatically accelerate malicious workflows—from malware development to social engineering. What’s unclear is how often it’s already happening—and what the long-term consequences may be.
For now, OpenAI has taken action. The accounts have been disabled, and detection efforts have improved. But as the arms race between defenders and attackers escalates in cyberspace, AI is becoming a new—and unpredictable—battlefield.
Source: The Hacker News