
Google says it disrupted a criminal hacking group's bid to weaponize a zero-day flaw discovered with the help of an AI model, the first such case on record.
Google Stops AI-Crafted 2FA Bypass
The Google Threat Intelligence Group, known as GTIG, disclosed the intervention Monday in its latest AI Threat Tracker report.
Researchers found the flaw inside a Python script designed to bypass two-factor authentication on a popular open-source, web-based system administration tool.
Google declined to name the affected vendor or the threat actor.
GTIG said it worked with the vendor to patch the flaw and notified law enforcement before any mass exploitation could begin.
The team flagged telltale traces of machine authorship in the code: a hallucinated CVSS severity score, educational docstrings, and textbook Pythonic formatting consistent with large language model training data. Google added that it has high confidence an AI model assisted the discovery and weaponization, though it does not believe its own Gemini was involved.
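Google has not published the script, but a minimal, hypothetical sketch can illustrate the tells GTIG describes. Every identifier, the CVE placeholder, and the severity figure below are invented for illustration; only the stylistic traits come from the report.

```python
# Hypothetical sketch of the LLM-authorship tells GTIG describes.
# All identifiers and the CVE/CVSS figures are invented; none come
# from the actual script Google analyzed.

import requests


def bypass_two_factor(session: requests.Session, base_url: str) -> bool:
    """Exploit CVE-2026-XXXXX (CVSS 9.8) to bypass two-factor authentication.

    An operator's tool rarely documents itself like a tutorial. This
    docstring cites a severity score no scoring authority ever assigned
    (the "hallucinated CVSS" tell) and explains its own purpose in the
    polite, educational register typical of model training data.
    """
    # Textbook-Pythonic structure: type hints, f-strings, and
    # step-by-step comments where a human attacker would leave none.
    response = session.post(f"{base_url}/login/verify", data={"code": "000000"})
    return response.status_code == 200
```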
Experts Warn AI Hacking Era Is Here
John Hultquist, chief analyst at GTIG, called the case tangible evidence of a long-warned threat.
"It's here," Hultquist told reporters. The era of AI-driven vulnerability exploitation has already begun, he added, with visible cases pointing to many more out in the wild.
Security analysts say the flaw type matters as much as the tool used to find it.
The bug was a semantic logic error, a hardcoded trust assumption that traditional fuzzers and static scanners are poorly equipped to catch, but that frontier models can reason through.
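The vulnerable code has not been released, but a minimal, hypothetical sketch of the flaw class (all names and values invented) shows why it evades conventional tooling: a fuzzer mutating the 2FA code almost never guesses the magic token, and a static scanner sees only well-formed branching, while a model reasoning about intent can notice that one branch defeats the check entirely.

```python
# Hypothetical sketch of a hardcoded trust assumption in a 2FA check.
# The token value and all names are invented; this illustrates the
# flaw class GTIG describes, not the actual vulnerable code.

SUPPORT_BYPASS_TOKEN = "internal-support-2019"  # forgotten debug backdoor


def verify_second_factor(headers: dict, submitted_code: str,
                         expected_code: str) -> bool:
    # The trust assumption: any request carrying the internal support
    # token skips 2FA outright. Syntactically clean and type-safe, so
    # scanners pass it, yet it makes the second factor optional.
    if headers.get("X-Support-Token") == SUPPORT_BYPASS_TOKEN:
        return True

    # The intended check.
    return submitted_code == expected_code
```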
Google also documented state-linked groups expanding AI use across the attack chain. North Korea's APT45 has been sending thousands of repetitive prompts to recursively analyze vulnerabilities, while a China-linked actor used a persona-driven jailbreak to push Gemini into researching firmware flaws.
Daybreak And Glasswing Lead Defender Push
The same week Google's findings went public, OpenAI launched Daybreak, a cybersecurity initiative pairing GPT-5.5 and Codex Security to help defenders find and patch flaws.
Daybreak runs on a tiered access system. Verified defenders can use GPT-5.5 with Trusted Access for Cyber, while a more permissive GPT-5.5-Cyber variant covers red teaming and controlled validation.
Sam Altman said OpenAI wants to work with as many companies as possible to continuously secure their software.
Daybreak enters a market already shaped by Anthropic's Project Glasswing, which uses Claude Mythos Preview to scan partner codebases for severe flaws. Apple, Microsoft, Google, Amazon, and JPMorgan Chase have signed on. The competing programs reflect a broader bet that frontier models can tip the balance toward defenders, even as attackers race to do the same.