Highlights:
- Google analysts found APT and IO actors using AI to speed up routine tasks, not create new threats.
- Notably, Iranian APT and IO actors were the top users of Gemini for research and content creation, while Russian APT actors showed limited interaction.
A recent report from Google LLC’s Threat Intelligence Group reveals how advanced persistent threat (APT) groups and coordinated information operations (IO) actors from countries including China, Iran, Russia and North Korea are incorporating generative AI into their campaigns. The upshot: though hackers are using AI for research and other routine work, the situation isn’t as severe as it could be.
The report, which examines interactions with Google’s AI assistant Gemini, found that suspected state-sponsored threat actors mainly used it for routine tasks like reconnaissance, vulnerability research, and content creation. Notably, there was no evidence that these hacking groups leveraged Gemini for more malicious activities, such as developing AI-driven attack techniques or circumventing its built-in safety measures.
Google analysts found that rather than using AI to devise novel attack techniques, APT and IO actors are primarily leveraging it to accelerate routine tasks. The report emphasizes that Gemini’s safeguards effectively blocked direct misuse, preventing its use for phishing, malware development, or infrastructure attacks.
The report highlights that Iranian APT and IO actors were the most frequent users of Gemini, mainly for research and content creation, while Russian APT actors engaged with the model only minimally. Chinese and Russian IO actors, meanwhile, primarily used Gemini for localization and messaging strategies rather than for offensive cyber activity.
“For skilled actors, generative AI tools provide a helpful framework, similar to the use of Metasploit or Cobalt Strike in cyber threat activity. For less skilled actors, they also provide a learning and productivity tool, enabling them to more quickly develop tools and incorporate existing techniques,” the report notes.
The report also observes that current large language models alone do not significantly enhance cybercriminal capabilities, while recognizing that this could shift as AI technology evolves. With the emergence of new AI models and agent-based systems, researchers anticipate that threat actors will keep experimenting with generative AI, underscoring the need for ongoing monitoring and updates to security frameworks.
To address both current and future risks, Google is continuously enhancing Gemini’s security measures and sharing insights with the broader cybersecurity community. The report emphasizes the importance of cross-industry collaboration to ensure AI is used for security rather than exploitation.