When the people who actually build AI start quitting — and sounding alarms on their way out — it's probably worth paying attention.
A pattern has taken shape at the industry's biggest labs: safety researchers, founding members and senior technical staff keep heading for the exits. At Elon Musk's xAI, roughly half of the original 12-person founding team has departed as of this month, with internal pressure mounting ahead of a potential SpaceX-xAI deal. Anthropic, the company that built its entire brand on responsible AI, lost its head of Safeguards Research, who warned publicly that "the world is in peril." OpenAI has seen a steady stream of safety-focused departures going back to 2023, so many that they barely register as news anymore.
These aren't mid-level employees airing grievances on LinkedIn. These are the people whose job it was to make sure powerful AI doesn't go sideways.
What makes the trend harder to dismiss is that the people who stay are raising red flags, too. Boris Cherny, head of Claude and creator of Anthropic's wildly popular Claude Code, said on Lenny's Podcast that AI agents will "transform every computer-based job" and that the change will be "painful." "As a society, this is a conversation we have to figure out together," Cherny said. "Anyone can just build software anytime."
The labs, meanwhile, aren't exactly pumping the brakes.
- OpenAI CEO Sam Altman recently said at an AI conference that, given what AI can do, adoption has been "surprisingly slow" — a remarkable framing from a company that keeps losing safety staff.
- Nvidia CEO Jensen Huang, speaking on the No Priors podcast, conceded that "the battle of narratives is being won by the critics" — an unusual admission from the man whose chips power the entire AI boom.
For businesses already deep into AI adoption, the governance picture is sobering. A Thomson Reuters report found that only 41% of organizations made their AI policies accessible to employees or even required them to acknowledge that the policies exist. "These policies are just words on paper if they are not understood, embraced, and actively practiced," said Katie Fowler, director of responsible business at the Thomson Reuters Foundation.
Put differently: the people building AI are worried. The people deploying AI at your company may not even know there's a policy for it.
Valley View
The AI industry loves to talk about "alignment" — making sure AI does what humans want. But there's a different alignment problem nobody's addressing: the growing gap between the people who build these systems and the executives racing to ship them. When safety researchers quit faster than you can hire them, that's not a retention issue. It's a canary in the coal mine. The question is whether the people still inside are paying attention.
