
Google’s AI-powered vulnerability-research agent, known as Big Sleep, identified a critical vulnerability in SQLite and helped block hackers from exploiting it before any damage occurred.
Cataloged as CVE-2025-6965, the flaw was a zero-day that had not been publicly disclosed. Google stated it was the only organization to detect the issue, using its advanced AI platform, which not only uncovered the weakness but also predicted that it was likely to be weaponized.
How Google paired human analysts with AI to stop the attackers
According to Google, its threat intelligence team observed subtle indicators suggesting an imminent exploit attempt. Although analysts could not immediately isolate the precise weakness, they relayed the preliminary signals to Big Sleep for deeper analysis. From there, the AI examined the codebase and uncovered the vulnerability that attackers were preparing to target.
The bug, which affects SQLite versions before 3.50.2, lets attackers craft SQL statements that trigger an integer overflow and read memory beyond intended bounds — a known tactic for causing system crashes or leaking sensitive data.
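For defenders, the practical takeaway is to confirm that any bundled SQLite library is at version 3.50.2 or later. The snippet below is a minimal illustrative sketch (not taken from Google's advisory or the SQLite changelog) that uses Python's standard sqlite3 module to compare the library version shipped with the interpreter against the fixed release:

```python
import sqlite3

# CVE-2025-6965 is fixed in SQLite 3.50.2; earlier releases are affected.
FIXED_VERSION = (3, 50, 2)

# sqlite3.sqlite_version_info reports the SQLite library this Python
# interpreter was built against, e.g. (3, 45, 1).
bundled = sqlite3.sqlite_version_info

if bundled < FIXED_VERSION:
    print(f"SQLite {sqlite3.sqlite_version} predates 3.50.2 -- "
          "upgrade the library to pick up the CVE-2025-6965 fix.")
else:
    print(f"SQLite {sqlite3.sqlite_version} already includes the fix.")
```

Note that applications which ship their own SQLite build (rather than using the system or interpreter copy) need to be checked and updated separately.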
“We believe this is the first time an AI agent has been used to directly foil efforts to exploit a vulnerability in the wild,” said Kent Walker, president of Global Affairs at Google and Alphabet, in a company blog post.
While Google has not disclosed who the attackers were or how far their plans had progressed, the company said the AI agent’s speed and accuracy were instrumental in stopping the potential breach.
The announcement comes as Google prepares to participate in major cybersecurity conferences this summer, including Black Hat USA and DEF CON 33.
Google’s vision: AI as cyber guardian
In the same blog post, Google outlined a broader strategy for AI-driven cybersecurity, including tools like Timesketch, an AI-powered forensics platform, and FACADE, an insider threat detection system that’s been running inside Google since 2018.
The company also announced plans to contribute data from its Secure AI Framework (SAIF) to support security initiatives like the Coalition for Secure AI (CoSAI).
Big Sleep’s early success demonstrates how AI, when used responsibly, can not only augment human security teams but also anticipate and neutralize threats more rapidly than traditional methods.
“These cybersecurity agents are a game changer, freeing up security teams to focus on high-complexity threats, dramatically scaling their impact and reach,” Walker stated, adding that Big Sleep has “exceeded expectations” in vulnerability research.