Google has identified state-sponsored hackers from Iran, China, Russia, and North Korea using its Gemini AI for espionage, although its safeguards have blocked malware generation.
Key Points at a Glance:
- Google reports Iranian, Chinese, Russian, and North Korean operatives leveraging Gemini AI for various intelligence activities.
- Iranian actors accounted for 75% of all observed misuse, with APT42 notably crafting phishing content.
- North Korea’s hackers used Gemini for job applications to infiltrate Western IT firms.
- Google states its AI models blocked attempts to generate malware or exploit its services.
- Concerns grow over AI’s role in cyber operations and the need for stricter regulations.
Google has revealed that state-backed hackers from Iran, China, Russia, and North Korea have been using its Gemini AI for intelligence gathering, with Iran being the most frequent user among them. According to Google’s Threat Intelligence Group (TIG), these actors have attempted to manipulate the AI for activities such as crafting phishing lures, researching vulnerabilities, and developing cyber strategies. However, Google asserts that its security measures have successfully blocked attempts to generate malware or execute direct cyberattacks.
Iranian cyber groups were identified as the most aggressive users of Gemini, accounting for 75% of all observed activity among these four nations. Google’s report highlights that at least ten Iran-backed hacking groups, including APT42, used the AI for reconnaissance, Android-related security research, and phishing content development. APT42 alone contributed to 30% of Iranian-linked AI activity, primarily using Gemini to create highly tailored social engineering campaigns.
Chinese state-sponsored hackers were also caught leveraging Gemini AI for intelligence gathering, particularly in researching U.S. government institutions and Microsoft-related systems. Google identified 20 Chinese cyber groups using the AI model, often focusing on content creation and translation for espionage purposes. While China remains an active user of AI for cyber operations, it is unclear whether it relies on domestic LLMs alongside Gemini.
North Korean cyber operatives, known for their cyber heists and espionage, used Gemini AI in a different manner—crafting job applications for IT professionals. This aligns with North Korea’s ongoing efforts to insert its operatives into Western companies for intelligence and financial exploitation. Additionally, Google reports that North Korean actors sought information on freelancer forums, South Korean military research, and nuclear technology.
Russian cyber groups were noted as relatively low users of Gemini AI, with only three identified operations. Google speculates that Russian hackers may be relying on domestic AI systems or employing stricter operational security measures to avoid detection. Interestingly, 40% of Russia’s AI-related activity was linked to operatives associated with the late oligarch Yevgeny Prigozhin, including groups tied to the Wagner Group. These actors reportedly used Gemini to rewrite content for influence campaigns and pro-Kremlin propaganda, tactics previously employed by Russia’s Internet Research Agency.
Despite these activities, Google maintains that its AI model is not a game-changer for cyber operations. The company asserts that while AI can enhance threat actors’ efficiency, it has not enabled them to develop novel cyber capabilities beyond their existing expertise. Google claims its AI guardrails have successfully blocked malicious code generation, even when attackers attempted to circumvent safeguards using known jailbreak techniques.
Google has also detected efforts to abuse its AI for researching ways to exploit other Google services, but these attempts were reportedly blocked. The company emphasized that it continues to refine its security measures and that its DeepMind division is working on new defenses against AI misuse. One such measure includes an evaluation framework to test the AI’s vulnerability to indirect prompt injection attacks.
The revelations from Google’s report underscore the evolving role of AI in cyber operations and raise concerns about how nation-state actors may seek to exploit AI-powered tools. As AI models become more sophisticated, the cybersecurity community is calling for increased transparency, stricter regulations, and international cooperation to prevent AI misuse in cyber warfare. Google’s findings serve as a reminder that while AI holds great promise for innovation, it also poses new challenges in global cybersecurity.