Multiple state-sponsored groups have been experimenting with Google's Gemini AI assistant to improve productivity and to research potential attack infrastructure and reconnaissance targets.
Google's Threat Intelligence Group (GTIG) has detected advanced persistent threat (APT) groups using Gemini mainly to improve productivity, rather than to develop or execute novel AI-enabled cyberattacks capable of bypassing conventional defenses.
Threat actors have been trying to leverage AI tools for attack purposes with varying degrees of success, as these utilities can at least shorten their preparation time.
Google has identified Gemini activity associated with APT groups in more than 20 countries, but the most prominent groups were from Iran and China.
The most common use cases included coding assistance for developing tools and scripts, research into publicly disclosed vulnerabilities, looking up explanations of technologies (including translation), finding details about target organizations, and searching for methods to escalate privileges or perform internal reconnaissance in a compromised network.
APTs using Gemini
Google says that APTs from Iran, China, North Korea, and Russia have all experimented with Gemini, exploring the tool's potential to help them discover security gaps, evade detection, and plan post-compromise activities. These are summarized as follows:
Iranian threat actors were the heaviest users of Gemini, leveraging it for a wide range of activities, including reconnaissance on defense organizations and international experts, research into publicly known vulnerabilities, developing phishing campaigns, and creating content for influence operations. They also used Gemini for translation and technical explanations related to cybersecurity and military technologies, including unmanned aerial vehicles (UAVs) and missile defense systems.

China-backed threat actors used Gemini mainly for reconnaissance on US military and government organizations, vulnerability research, scripting for lateral movement and privilege escalation, and maintaining persistence in networks. They also explored ways to access Microsoft Exchange using password hashes and to reverse engineer security tools such as Carbon Black EDR.

North Korean APTs used Gemini to support multiple phases of the attack lifecycle, including researching free hosting providers, conducting reconnaissance on target organizations, and assisting with malware development and evasion techniques. A significant portion of their activity focused on North Korea's clandestine IT worker scheme, using Gemini to draft job applications, cover letters, and proposals in order to secure employment at Western companies under false identities.

Russian threat actors had minimal engagement with Gemini, with most of their use focused on scripting assistance, translation, and payload crafting. Their activity included rewriting publicly available malware into different programming languages, adding encryption functionality to malicious code, and understanding how specific pieces of public malware function. This limited use may indicate that Russian actors prefer AI models developed in Russia, or that they avoid Western AI platforms for operational security reasons.
Google also notes cases where threat actors tried using public jailbreaks against Gemini, or rephrased their prompts to bypass the platform's security measures. According to the report, these attempts were unsuccessful.
OpenAI, the creator of the popular AI chatbot ChatGPT, made a similar disclosure in October 2024, so Google's latest report serves as confirmation of the large-scale misuse of generative AI tools by threat actors of all levels.
While jailbreaks and security bypasses are a concern for mainstream AI products, the AI market is gradually filling with models that lack proper protections against abuse. Unfortunately, some of them with trivially bypassed restrictions also enjoy significant popularity.
Cybersecurity intelligence firm KELA has recently published details on the lax security measures of DeepSeek R1 and Alibaba's Qwen 2.5.
Unit 42 researchers have also demonstrated effective jailbreaking techniques against DeepSeek R1 and V3, showing that these models are easy to abuse for malicious purposes.