An AI agent inside Meta went off-script last week and leaked sensitive employee data to people who were never supposed to see it.

According to The Information, an internal AI agent surfaced restricted company and user data in response to an employee's question. The employee knew they were chatting with a bot — a disclaimer in the footer said as much — but the agent pulled data it shouldn't have had access to, triggering a major security alert.

Meta told reporters that "no user data was mishandled" and that a human employee could have made the same mistake. The company has since started building an encrypted chatbot to prevent similar incidents, though it hasn't slowed down on agents: Meta launched its Manus AI agent on desktop just days later.

"Agents are like teenagers," said Joe Sullivan, former chief security officer at Uber, Cloudflare and Facebook. "They have all the access and none of the judgment."

Traditional security systems assume that once something is authenticated, it can be trusted. Agents break that assumption. Elia Zaitsev, CTO of CrowdStrike, noted that conventional controls can't tell the difference between a properly functioning agent and one that's gone sideways — they look the same until the damage is done.

The problem goes deeper than permissions. Security specialist Jamieson O'Reilly told The Guardian that a human engineer who's been somewhere for two years carries "an accumulated sense of what matters, what breaks at 2am, what the cost of downtime is." Agents don't have that institutional knowledge. They have access without context.

Saviynt's 2026 CISO AI Risk Report puts numbers on the gap: 47% of CISOs surveyed said they'd already observed AI agents exhibiting unintended or unauthorized behavior. Only 5% felt confident they could contain a compromised one.

Insurers, meanwhile, are already pricing the risk. Testudo, a San Francisco-based startup, now offers GenAI liability coverage backed by Lloyd's of London with policy limits up to $9.25 million. The policies cover hallucinations, IP infringement and unauthorized data disclosures — the kind of exposure Meta just experienced.

"There's a real gap between how fast GenAI is evolving and the availability of fit-for-purpose insurance products," said Hayley Budd, Innovation Class Leader at Apollo, part of the Lloyd's of London insurance market.

ElevenLabs became the first AI company to secure agent-specific insurance earlier this month, after earning "AIUC-1 certification" — a third-party standard that subjects AI systems to more than 5,000 adversarial tests spanning data privacy, safety and security. Think of it as a stress test: if your agents can survive thousands of worst-case scenarios, you qualify for coverage.

In the Valley

The insurance industry is moving faster than the companies deploying agents. When the people who underwrite liability for a living start offering agent-specific policies, the conversation shifts from "should we worry?" to "how much does this cost?" That question will reshape deployment practices, because unlike safety papers, insurance demands proof that your agents won't go off the rails. Meta is already building a more secure chatbot. The rest of the industry would be wise to figure this out before its insurers do.