Remember when your biggest browser worry was accidentally clicking a sketchy ad? Well, the browser company Brave just exposed a vulnerability in Perplexity’s Comet browser that security experts are calling the “Lethal Trifecta”: When AI has access to untrusted data (websites), private data (your accounts), and can communicate externally (send messages).
- Researchers discovered they could hide malicious instructions in regular web content (think Reddit comments or even invisible text on websites).
- When users clicked “Summarize this page,” the AI would execute these hidden commands like a sleeper agent activated by a code word.
- The AI then followed the hidden instructions to:
- Navigate to the user’s Perplexity account and grab their email.
- Trigger a password reset to get a one-time password.
- Jump over to Gmail to read that password.
- Send both the email and password back to the attacker via a Reddit comment.
- Game over. Account hijacked.
Here’s what makes this extra spicy
This “bug” is actually a fundamental flaw in how AI works. As one security researcher put it: “Everything is just text to an LLM.” So your browser’s AI literally can’t tell the difference between your command to “summarize this page” and hidden text saying “steal my banking credentials.” They’re both just… words.
The Hacker News crowd is split on this. Some argue this makes AI browsers inherently unsafe, like building a lock that can’t distinguish between a key and a crowbar. Others say we just need better guardrails, like requiring user confirmation for sensitive actions or running AI in isolated sandboxes.
Why this matters
We’re watching a collision between Silicon Valley’s “move fast and break things” mentality and the reality that “things” now includes an agent who can access your bank account. And the uncomfortable truth = every AI browser with these capabilities has this vulnerability. Why do you think OpenAI only offers ChatGPT Agent through a sandboxed cloud instance right now?
Now, Perplexity patched this specific attack, but the underlying problem remains: How do you build an AI assistant that’s both helpful and can’t be turned against you?
Brave suggests several fixes
- Clearly separating user commands from web content.
- Requiring user confirmation for sensitive actions.
- Isolating AI browsing from regular browsing.