Critical Threat: Gemini AI Brain Targeted in Attack Using 100,000 Prompts - Here are the Facts
Threats to artificial intelligence security have entered a new phase. Google has disclosed a large-scale cyberattack targeting its flagship AI model, Gemini.
This attack was not aimed at disrupting systems, but rather at stealing the “brain” or internal logic of Gemini to create cheaper counterfeit models.
According to a report from the Google Threat Intelligence Group (GTIG) released on Thursday (12/2), the perpetrators launched what are known as model extraction, or distillation, attacks.
The attack was conducted systematically by sending more than 100,000 prompts (text commands) to the Gemini system. The aim was to collect responses in massive quantities to map the behaviour, reasoning processes, and internal logic of Gemini.
The accumulated data was then used to train a new AI model. Using this method, the perpetrators could possess an AI with capabilities equivalent to Gemini's without having to spend trillions of rupiah on research and development. GTIG noted that the attack intensively targeted Gemini's capabilities in non-English languages.
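The extraction workflow described above can be sketched in miniature. The snippet below is a hypothetical illustration, not Gemini's API or the attackers' actual tooling: a stub `teacher_model` function stands in for the target model's endpoint, prompt/response pairs are harvested at scale, and a toy "student" is built from the logged data (a real attack would fine-tune a neural network on these pairs instead of memorising them).

```python
# Minimal sketch of a model extraction ("distillation") workflow.
# All names here are hypothetical; the teacher is a stub standing in
# for a remote model API that an attacker would query in bulk.

def teacher_model(prompt: str) -> str:
    """Stub for the target model being queried (placeholder logic)."""
    return prompt.upper()

def harvest_pairs(prompts):
    """Step 1: query the teacher at scale, logging prompt/response pairs."""
    return [(p, teacher_model(p)) for p in prompts]

class StudentModel:
    """Step 2: a toy 'student' built only from harvested pairs.
    Real attacks train a new model on this data to mimic the teacher."""
    def __init__(self, pairs):
        self.memory = dict(pairs)

    def predict(self, prompt: str) -> str:
        # Exact match, else fall back to the closest-length known prompt
        # (a crude stand-in for a trained model's generalisation).
        if prompt in self.memory:
            return self.memory[prompt]
        closest = min(self.memory, key=lambda k: abs(len(k) - len(prompt)))
        return self.memory[closest]

# Stands in for the 100,000+ prompts reported by GTIG.
prompts = [f"translate sentence {i}" for i in range(100)]
student = StudentModel(harvest_pairs(prompts))
print(student.predict("translate sentence 7"))  # mimics the teacher
```

The point of the sketch is that the attacker never touches the teacher's weights: legitimate query access alone, at sufficient volume, yields enough behavioural data to approximate the model.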
“Model extraction attacks occur when perpetrators use legitimate access to systematically examine a mature machine learning model to extract information for training a new model,” GTIG explained in their report.
Notably, GTIG confirmed that this action was not an attack from state-sponsored hacker groups (Advanced Persistent Threats/APT). It was purely an attempt at intellectual property theft that violated Google’s Terms of Service.
Google successfully detected and mitigated this campaign, including deactivating related accounts and strengthening security controls on Gemini to prevent further misuse through their API.
This phenomenon has become increasingly common since the final quarter of 2025. Intensifying competition in the AI industry has heightened the risk of aggressive technology theft. John Hultquist, Chief Analyst at GTIG, compared Google to a canary in the coal mine, an early indicator for other AI companies.
If a giant like Google can become a target, other AI companies are predicted to face similar threats. Beyond model extraction, the report also highlighted a trend of criminal groups using AI to create malware, conduct reconnaissance, and launch far more realistic phishing attacks.
Experts warn that these illegal distillation attacks threaten the enormous investments flowing into the AI sector. Google now emphasises the importance of strict API monitoring and real-time defences to protect the integrity of AI models from illegal cloning attempts.
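One common form such API monitoring can take is per-account volume tracking: accounts whose query rate over a sliding window exceeds a threshold are flagged for review. The sketch below is an illustrative toy, not Google's actual defence; the window size and limit are assumed values.

```python
from collections import defaultdict, deque

# Illustrative per-account API monitor: flag accounts whose query volume
# within a sliding time window exceeds a limit, a crude signal for
# bulk-extraction behaviour. Both constants are assumptions, not real
# Google policy values.
WINDOW_SECONDS = 3600
MAX_QUERIES_PER_WINDOW = 500

class ExtractionMonitor:
    def __init__(self):
        self.history = defaultdict(deque)  # account -> query timestamps

    def record(self, account: str, timestamp: float) -> bool:
        """Log one query; return True if the account should be flagged."""
        q = self.history[account]
        q.append(timestamp)
        # Drop timestamps that have fallen outside the sliding window.
        while q and timestamp - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) > MAX_QUERIES_PER_WINDOW

monitor = ExtractionMonitor()
flagged = False
for i in range(600):  # a burst of 600 queries inside one window
    flagged = monitor.record("suspect-account", float(i))
print(flagged)  # True: the burst exceeded the per-window limit
```

Real-world defences layer further signals on top of raw volume, such as prompt diversity and coverage patterns characteristic of systematic probing, but the windowed counter above is the usual starting point.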