Why are AI leaders fleeing?

Normally, when big-name talent leaves a Silicon Valley giant, the PR language is vanilla: they’re headed for a “new chapter,” they’re “grateful for the journey,” or maybe there are vague hints about a stealth startup. In the world of AI, though, recent exits read more like whistleblower warnings.

Over the past couple of weeks, senior researchers and safety leads from OpenAI, Anthropic, xAI, and others have resigned in public, one after another, and there’s nothing quiet or vanilla about it.

Take, for example, OpenAI researcher Zoë Hitzig. She chose not to quietly flip her LinkedIn profile but to announce her resignation in a New York Times guest essay titled “OpenAI Is Making the Mistakes Facebook Made. I Quit.”

Who resigns that way — in the Times? 

What ticked her off was OpenAI’s decision to start testing ads inside ChatGPT. Ironically, in 2024, OpenAI CEO Sam Altman had said, “I hate ads,” arguing that “ads plus AI” is “uniquely unsettling” because people are forced to figure out who is paying to influence the answers they get. But, hey, when even OpenAI’s internal bean counters expect the company to lose $14 billion in 2026 alone, Altman managed to get over his qualms.

Not so, Hitzig. She wrote, “People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.” (She’s right, of course.)

But, I’m sorry, that’s naive. Facebook didn’t make a mistake. It made billions of dollars by exploiting people who shared things online with family and friends. It’s been a truism of internet business models since the late 2000s: “If you’re not paying for the service, you are the product.”

Sure, marrying intimate disclosures to an ad business via AI is a creepy construct. So is Facebook’s and X’s ability to profit by sending people down the rabbit hole of engagement, outrage, and behavioral profiling. But no one’s stopping them. Heck, back in 2016, Facebook’s lax platform policies let Cambridge Analytica harvest millions of user profiles, allowing the Trump campaign to target ads at a nearly individual level and helping Donald Trump win the election.

That eventually cost Facebook about $6 billion in fines and settlements. That sounds like serious money until you consider that Facebook’s parent company, Meta, reported 2025 GAAP revenue of more than $200 billion, almost all of it from advertising.

I have a feeling Altman will get over his queasy feeling about that kind of revenue.

Meanwhile, over at Anthropic, the departing head of the Safeguards research team, Mrinank Sharma, was even more direct. In a resignation letter shared with the world on X, he wrote that “the world is in peril.” He described, in the sort of polite but pointed language that sets off lawyers, how hard it is in practice for a company to “let our values govern our actions” when the money, the market, and the internal prestige all point toward shipping more capable models, faster. Here, again, we see ethics taking a back seat to profits. 

Mind you, Anthropic is the AI company that branded itself around “constitutional AI” and careful deployment. If senior safety leaders there feel they can no longer put morality ahead of cash, that’s a red flag.

If this were just about two idealistic researchers taking a stand, you could write it off as personality and politics. It isn’t. OpenAI recently disbanded its “mission alignment” team, which was tasked with making AI safe. (Don’t forget, OpenAI started as a nonprofit, funded by donations and pledges rather than equity investment, and dedicated to ensuring that artificial general intelligence (AGI) would benefit “all of humanity.”)

Today, OpenAI and Anthropic are positioning themselves for IPOs that would realize billions for their owners and eventual shareholders. As anyone who has paid attention to the stock market since ChatGPT exploded onto the scene in late 2022 knows, the market is dominated by the AI-driven Magnificent Seven, with its $20.2 trillion in combined market cap. And you wonder why people are worried about an AI bubble popping!?

There are other AI leaders headed for the door. At Elon Musk’s xAI, now freshly folded into SpaceX via an all-stock deal, co-founders Tony Wu and Jimmy Ba headed for the lifeboats while Musk talked about “reorgs” and how some people are “better suited for the early stages of a company and less suited for the later stages.” Sure, Elon, sure.

Meanwhile, VERSES AI’s founders and CEO are out as the board installs an interim leader and pushes a sharper commercial pivot. Even Apple is suffering an “AI brain drain”: Senior Vice President John Giannandrea is stepping down, and Siri leader Robby Walker has left for Meta.

Each individual story is different, but I see a thread here. The AI people who asked “What should we build, and how do we build it safely?” are leaving. They’ll be replaced by people whose first, if not only, priority is “How fast can we turn this into a profitable business?” And not just profitable; even a unicorn, a startup valued at $1 billion, isn’t enough for these people. If the business isn’t a “decacorn,” a privately held startup valued at more than $10 billion, they don’t want to hear about it.

I think it’s very telling that Peter Steinberger, the creator of the insanely — in every sense of the word — hot OpenClaw AI bot, has already been hired by OpenAI. Altman calls him a “genius” and says his ideas “will quickly become core to our product offerings.” 

Actually, OpenClaw is a security disaster waiting to happen. Someday soon, some foolhardy people or companies will lose their shirts because they entrusted valuable information to it. And its inventor is who Altman wants at the heart of OpenAI!?

Gartner needs to redo its hype cycle. With AI, we’re past the “Peak of Inflated Expectations” and charging toward the “Pinnacle of Hysterical Financial Fantasies.”

The people leaving before it all goes to hell? They’re the wise ones.