A report issued Wednesday by Google revealed that hackers from various countries, particularly China, Iran, and North Korea, have been using the company’s artificial intelligence-enabled Gemini chatbot to enhance cyberattacks against targets in the United States.
Google’s findings indicate that access to publicly available large language models (LLMs) has increased the efficiency of cyberattackers but has not significantly altered the types of attacks they typically execute.
LLMs are advanced AI models trained on vast amounts of previously generated content, enabling them to identify patterns in human language. That capability also allows them to generate functional computer code.
According to the report, “Rather than enabling disruptive change, generative AI allows threat actors to move faster and at a higher volume.”
The report noted that generative AI benefits both low-skilled and high-skilled hackers. However, it stated, “current LLMs on their own are unlikely to enable breakthrough capabilities for threat actors.” The Google Threat Intelligence Group anticipates that as the AI landscape evolves with new models and systems, the threat landscape will adapt in step as threat actors integrate new AI technologies into their operations.
These findings align with previous research from other major U.S. AI companies, including OpenAI and Microsoft, which similarly found that public generative AI models have not enabled novel offensive strategies for cyberattacks.
Google emphasized its commitment to disrupting the activities of threat actors when they are identified.
“AI, so far, has not been a game changer for offensive actors,” said Adam Segal, director of the Digital and Cyberspace Policy Program at the Council on Foreign Relations. “It speeds up some processes. It provides foreign actors with improved capabilities to craft phishing emails and discover code. But has it dramatically changed the game? No.”
Whether this will change in the future remains uncertain, Segal noted. It is also unclear whether advancements in AI technology will primarily assist those building defenses against cyberattacks or the threat actors attempting to outmaneuver them. “Historically, defense has been difficult, and technology has not resolved that issue,” Segal commented. “I suspect AI won’t do that, either. But we don’t know yet.”
Caleb Withers, a research associate at the Center for a New American Security, concurred that an arms race may develop as offensive and defensive cybersecurity applications of generative AI evolve. However, he suggested that they may largely balance each other out. “The default assumption should be that absent certain trends that we haven’t yet seen, these tools should be as useful to defenders as they are to offenders,” he explained. “Any productivity-enhancing tool generally applies equally to both sides, even regarding the discovery of vulnerabilities.”
The report categorizes the threat actors using Gemini into two main groups.
Advanced persistent threat (APT) actors are described as “government-backed hacking activity, including cyber espionage and destructive computer network attacks.” In contrast, information operation (IO) threats “attempt to influence online audiences in a deceptive, coordinated manner.” Examples include sock puppet accounts (phony profiles hiding users’ identities) and comment brigading (organized online attacks aimed at altering perceptions of online popularity).
Google’s report identified hackers from Iran as the most frequent users of Gemini across both threat categories. Iranian APT actors used the service for tasks such as gathering information on individuals and organizations, researching targets and their vulnerabilities, translating material, and creating content for future online campaigns.
Moreover, Google tracked over 20 Chinese government-backed APT actors using Gemini for reconnaissance on targets, scripting and development, requesting translations, and explaining technical concepts, all while attempting to gain deeper access to networks after initial compromises.
North Korean state-backed APTs employed Gemini for many of the same tasks as their Iranian and Chinese counterparts, but they also appeared to use the service to help place “clandestine IT workers” in Western companies to facilitate the theft of intellectual property.
In terms of information operations, Iran was notably the heaviest user of Gemini, accounting for 75% of detected usage. Hackers from Iran utilized the service to create and manipulate content designed to influence public opinion, adapting that content for different audiences.
Chinese IO actors primarily used Gemini for research, focusing on matters of strategic interest to the Chinese government. Russian hackers, whose presence in the APT category was minimal, were more prominent in the IO-related use of Gemini, employing it not only for content creation but also for gathering information on developing and using online AI chatbots.
Also on Wednesday, Kent Walker, president of global affairs for Google and its parent company, Alphabet, highlighted the potential dangers posed by threat actors utilizing increasingly sophisticated AI models. He called for collaboration between the industry and the federal government “to support our national and economic security.”
“America holds the lead in the AI race — but our advantage may not last,” Walker stated. He asserted that the U.S. must maintain its narrow advantage in developing the technologies that drive the most advanced artificial intelligence tools. Additionally, he urged the government to streamline procurement rules to facilitate the adoption of AI, cloud, and other transformative technologies within the U.S. military and intelligence agencies, while establishing public-private cyber defense partnerships.