Openthropic: Is Anthropic now the company OpenAI aspired to be?

This week, OpenAI called another “Code Red” about its business. OpenAI’s CEO of Applications, Fidji Simo, said OpenAI was “distracted by side quests” and lacked focus on the enterprise-business front, according to the Wall Street Journal. (The first Code Red came back in Dec. 2025 in response to Gemini 3’s rise.)

This Code Red was in response not to Google, but to upstart Anthropic.

Simo pointed to competitor Anthropic’s rise in the enterprise market and computer programming as a “wake-up” call for OpenAI.

Wake-up for sure.

But the drama that is unfolding may be larger than simply revenues and competition. Instead, one cannot help but wonder whether Anthropic is now the company that OpenAI originally aspired to become: an innovative AI startup that is both business-focused and guided by its principles and values, set out in Claude’s Constitution.

Anthropic’s Focused Business v. OpenAI’s Kitchen Sink Approach

Anthropic’s business model is more focused, for starters.

Anyone who has been paying attention to the exponential rise of Anthropic’s Claude this past year will not be surprised by Simo’s assessment. She’s right. But she is simply stating facts. Claude is the industry leader for software developers and enterprise customers. And it may soon eat into ChatGPT’s popularity with consumers.

Compare Anthropic’s laser-focused business model with OpenAI’s. We already flagged concerns about OpenAI’s “kitchen sink approach” back in January: “In terms of its business model, OpenAI appears to be massively spending on a wide array of projects, from ChatGPT to Sora, from apps geared to individuals to, more recently, apps for enterprise. A kitchen sink approach. By contrast, Anthropic has focused more narrowly on Claude Code and enterprise applications, such as for writing computer code.”

According to the WSJ, “Simo told staff Anthropic’s success should serve as a ‘wake-up call’ for the company, and that it had to regain the lead among software developers and enterprise customers.”

Whether OpenAI can regain focus remains to be seen. It says it will develop a “Superapp” to streamline its offerings for desktop use. But OpenAI also apparently plans to deploy an “Adult Mode” to allow X-rated content on ChatGPT. What could go wrong?

On the business front, Anthropic appears to have the more coherent strategy, focusing on enterprise applications and AI agents.

Anthropic’s Stand v. U.S. Department of War

Next consider the issue of principles and values. OpenAI was founded as a nonprofit with some aspiration of developing AI for the public good.

On Dec. 11, 2015, OpenAI announced: “OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.”

Flash forward to Feb. 27, 2026.

OpenAI’s call of a “Code Red” about its unfocused business ironically comes amid an even greater controversy, and public relations nightmare, for OpenAI: its signing of a military contract with the U.S. Department of War on Feb. 27, 2026.

But OpenAI’s military contract came only after Anthropic took a stand against the U.S. Department of War’s demand for a change to its existing contract with Anthropic. Anthropic refused to make the changes, which would have allowed the U.S. military to use Anthropic’s AI for two purposes that Anthropic opposed: (1) mass surveillance and (2) autonomous weapons.

The Department of War terminated its contract with Anthropic and declared that its AI was a “supply chain risk” to national security. Anthropic has filed two lawsuits to challenge that designation.

OpenAI’s agreement with the Department of War sparked a QuitGPT boycott among ChatGPT users. One estimate claimed that at least 1.5 million people canceled their ChatGPT accounts, many probably signing up with Claude.

Some people even protested outside OpenAI’s offices.

Altman was even confronted at a post-Oscars event by the playwright Jeremy O. Harris, who reportedly said to Altman: “I don’t know how you can comfortably look at yourself in the mirror, knowing you just gave your technology over to a department that called themselves the Department of War, and which just killed 175.”

And further: “You came out into the world, used our tax dollars in a nonprofit that espoused its goal to save humanity, to be this bright beacon for the future and hope.”

A couple of days later, Harris admitted he “had a few too many martinis” and corrected a reference he had made to Joseph Goebbels (he said he meant Friedrich Flick).

Altman reportedly took the rebuke in stride.

Yet, the criticism of OpenAI selling out its nonprofit, humanitarian mission surely must have stung.

OpenAI’s Original Plan as a Nonprofit

The kerfuffle with Harris raises an issue that has dogged OpenAI.

There’s a perception that OpenAI started out with a nonprofit mission for building AI for the public good, only later to abandon it.

Harris is in the company of other prominent people who espouse that view.

In his fraud suit against Sam Altman and OpenAI, Elon Musk alleges that OpenAI’s pitch to be a nonprofit was a fraud designed to get Musk to donate considerable money, time, and involvement to start OpenAI. That suit goes to trial on April 27, after Judge Yvonne Gonzalez Rogers found sufficient evidence to send it to trial.

Karen Hao, in her bestseller Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, does not speak directly to Musk’s fraud claim. But the book includes a scathing critique of OpenAI’s abandonment of its nonprofit mission: “Founded as a nonprofit with safety enshrined as its core mission, the organization was meant, its leader Sam Altman told us, to act as a check against more purely mercantile, and potentially dangerous, forces” (book jacket). Instead, “OpenAI became everything that it said it would not be” (p. 14).

In OpenAI’s defense, let me add: the amount of capital needed to develop AI technology, including models that could compete with Google’s, probably could not be raised by a nonprofit alone. That need for capital is discussed in Hao’s book and in the email evidence in the Musk lawsuit. And I have not seen anyone dispute the need for capital to develop AI (that is not simply distilled from existing models). That is why OpenAI added a for-profit investment arm and eventually converted its complicated structure to include a public benefit corporation.

That’s what Anthropic is. A public benefit corporation.

OpenAI Employees Created Anthropic

And, of course, it cannot be lost that Anthropic was created by eight employees of OpenAI who left the company to found their own startup.

Dario Amodei, Anthropic’s CEO and co-founder, has said they left because they believed that AI safety and alignment needed to be incorporated alongside the scaling of compute in order to develop better AI models.
