Sam Altman's Twenty-Four Hours: The Pentagon said "no" twice, but only one was serious

By: blockbeats|2026/02/28 23:00:01

On the morning of February 28th Beijing time, Sam Altman tweeted: "Tonight, we reached an agreement with the U.S. Army to deploy our model into their classified network."


Rewind roughly twelve hours, to the evening of February 27th Beijing time. The same man sat in front of the CNBC Squawk Box cameras, calmly saying: "As for Anthropic, despite our many disagreements, I fundamentally trust this company. I believe they genuinely care about safety." He also said: "I don't think the Pentagon should be using the Defense Production Act to threaten these companies."

In less than twelve hours, from the same mouth, two different statements. What happened in between is worth explaining.

Two Similar Terms, Two Different Outcomes

Let's first put the core content of the two contracts side by side.

Anthropic's request: Claude shall not be used for mass surveillance of U.S. citizens, and shall not be used for autonomous weapons systems that operate without human intervention.

When Altman announced the agreement in his tweet, he cited the same two principles: "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and humans should always be present in high-risk automated decision-making." He also wrote: "The Pentagon has accepted these principles, will reflect them in law and policy, and we have written them into the contract."

The wording is almost identical.

One company was banned, labeled as a "supply chain risk," and personally called a "woke far-left company" by Trump on Truth Social. The other company received a contract, entered the Pentagon's classified network, and Sam Altman used the phrase "reached an agreement" in his tweet—calm, business-like, as if a routine B2B transaction had been completed.

This is the question the entire article aims to answer: with similar terms, why are the outcomes completely different?

The answer is not in the terms but in the logic behind the terms.

One key piece of background is worth clarifying first: Anthropic was the only one of the four frontier AI companies working with the Pentagon (the others being OpenAI, Google, and xAI) whose models were permitted inside the Pentagon's classified network. OpenAI's original contract covered only non-classified daily office scenarios. This negotiation was essentially about OpenAI wanting access to the classified network, with the Pentagon's condition for entry being the contentious "for all lawful purposes" clause. Anthropic was already inside, but the Pentagon was now demanding it take down the safety gate it had installed on the way in.

The Pentagon Doesn't Care What Is Written; It Cares Who Said It

To understand this, you first need to understand what Anthropic's Dario Amodei said in that public letter.

He wrote: "Anthropic understands that military decisions are made by the Pentagon, not private companies. We have never objected to specific military actions, nor have we attempted to temporarily restrict the use of our technology."

Then his tone shifted: "However, in a very few cases, we believe AI may undermine rather than defend democratic values. Threats will not change our position: this is a matter of conscience, and we cannot accept these demands."

What does this statement mean in contract language? Anthropic demanded that the principles be written into the contract as hard constraints: if the other party breaches them, Anthropic has the right to refuse further service.

What does the Pentagon hear? A private company telling the government's military: in certain situations, I may choose not to follow your orders, and I define the boundaries.

This is unacceptable to any military. Not because they actually want large-scale surveillance, but because the very question of "who has the authority to decide" is the most sensitive nerve in the military command system. Military acquisition lead Jerry McGinn put it bluntly: military contractors usually have no authority to dictate how the Pentagon can or cannot use their products, "otherwise, every contract would have to discuss specific use cases, which is not practical."

OpenAI gave a completely different answer.

In a memo, Altman told employees that OpenAI would propose to the Pentagon to build its own "safety stack," a multi-layered protection system consisting of technical controls, policy frameworks, and human oversight, embedded between AI models and actual use. OpenAI also mentioned that it could deploy researchers with security clearances into classified networks to monitor AI behavior continuously; models would only be deployed in the cloud, not in edge systems like drones.
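The layering described in that memo can be pictured as a chain of gates sitting between the model and the end use. The sketch below is a hypothetical illustration, not OpenAI's design: every function name, category label, and rule in it is invented, purely to show how technical controls, policy rules, and human review might compose.

```python
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str      # what the user asks the model to do
    use_case: str    # declared category of use (labels here are invented examples)
    requester: str

# Layer 1: technical controls, hard blocks enforced in code.
BLOCKED_USE_CASES = {"autonomous_weapons", "mass_surveillance"}

def technical_controls(req: Request) -> bool:
    return req.use_case not in BLOCKED_USE_CASES

# Layer 2: policy framework, rules that flag rather than block outright.
HIGH_RISK_USE_CASES = {"targeting", "intelligence_analysis"}

def policy_review(req: Request) -> str:
    return "escalate" if req.use_case in HIGH_RISK_USE_CASES else "allow"

# Layer 3: human oversight, cleared reviewers decide escalated cases.
def human_review(req: Request) -> bool:
    print(f"escalated to human reviewer: {req.use_case} ({req.requester})")
    return False  # default-deny until a human approves

def safety_stack(req: Request) -> bool:
    """Run a request through all three layers in order."""
    if not technical_controls(req):
        return False
    if policy_review(req) == "escalate":
        return human_review(req)
    return True

print(safety_stack(Request("route supplies", "logistics", "unit-a")))          # True
print(safety_stack(Request("pick targets alone", "autonomous_weapons", "b")))  # False
```

The structural point is that the hard blocks live in code while everything ambiguous falls through to a human, which is the "you oversee, you see everything" posture the memo describes.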

In translation: you watch, you supervise, you see everything that happens. If something goes wrong, we share the responsibility, and you won't need to come asking me for explanations.

"Rules are set in stone, and I execute" versus "I embed, you oversee" are two completely different power dynamics, and the Pentagon only accepts the latter.

What OpenAI Does Best Is Exactly What the Pentagon Wants

There's an uncomfortable irony that needs to be spelled out here.

OpenAI has already practiced on its own users the "technical transparency" and "continuous oversight" it promised the Pentagon.

In August 2025, OpenAI quietly unveiled a new monitoring mechanism in an official blog post about user mental health crises: when the system detects a user "planning to harm others," the conversation is shifted to a dedicated channel overseen by a trained human review team authorized to escalate cases to law enforcement. OpenAI disclosed this proactively, but it was buried in a mid-length post on mental health and drew little attention.
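Mechanically, that kind of routing can be pictured as a classifier gating the conversation pipeline. The sketch below is purely illustrative: the keyword check, function names, and channel names are invented stand-ins, not OpenAI's actual system, which would use a trained classifier rather than keyword matching.

```python
# Illustrative sketch of conversation routing to human review.
# All detection logic and channel names here are hypothetical.

HARM_SIGNALS = ("plan to hurt", "going to attack")  # stand-in for a real classifier

def detects_harm_to_others(message: str) -> bool:
    text = message.lower()
    return any(signal in text for signal in HARM_SIGNALS)

def route(message: str) -> str:
    """Divert flagged conversations to a dedicated human review channel."""
    if detects_harm_to_others(message):
        # A trained review team sees these and may escalate to law enforcement.
        return "human_review_channel"
    return "normal_pipeline"

print(route("What's a good pasta recipe?"))   # normal_pipeline
print(route("I plan to hurt my neighbor."))   # human_review_channel
```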

In February 2026, just before this contract was signed, OpenAI launched an ad system and updated its privacy policy to make one thing clear: ChatGPT conversations of Free and Basic plan users undergo "in-session contextual analysis" so that relevant ads can be shown based on the conversation topic. If you're discussing recipes, for example, you might see ads for food delivery services. OpenAI emphasizes that conversation content itself is not shared with advertisers, but the analysis happens in real time. The ads began testing on February 9.

In November 2025, OpenAI's third-party analytics provider Mixpanel was breached, exposing some API users' names, emails, approximate locations, operating systems, and browser information. OpenAI subsequently terminated its relationship with Mixpanel, and lawsuits are pending. The incident primarily affected API developers; the ordinary ChatGPT users affected were those who had submitted help-center tickets through the platform.

This is the company that pledged to the Pentagon "technical transparency, continuous oversight, letting you see everything that happens."

What it does best is letting others look inside, because that is how it already treats its own users.

Anthropic believes rules can constrain users; OpenAI believes embedding its own people is more effective than any terms. The former is idealistic compliance logic, the latter is realistic influence logic. The Pentagon chose the latter because it's more familiar and controllable to them.

What Transpired in the Hours in Between?

Fast forward to the early morning of February 28 Beijing time, 5:01 p.m. on February 27 Eastern Time.

The deadline for Anthropic arrived. Dario did not compromise. Trump announced a ban on Truth Social, Hegseth categorized Anthropic as a "supply chain risk" on X, and Anthropic declared it would respond through legal means.

A notable line in Hegseth's declaration: "Anthropic's position is fundamentally incompatible with American principles." Then, in the same statement, he said Anthropic could continue providing services to the Pentagon "for no more than six months to ensure a smooth transition." In other words, the Pentagon had just labeled a company a national security risk yet would keep using that company's products. No one has directly addressed this contradiction.

Hours later, Altman posted that tweet.

Recall his remarks at the all-hands meeting earlier that day: he hoped OpenAI could "help cool the situation" and find a solution that could "set a framework for the entire industry." That was not the tone of someone waiting passively.

This is not the first time Silicon Valley has seen this kind of maneuver.

In 2023, OpenAI's nonprofit board dismissed Altman for being "less than candid," citing his overly rapid pace and communication issues. Five days later, Altman returned with a collective support letter from employees, leading to the dissolution of the board. He then spearheaded the company's transition from a nonprofit structure to a for-profit entity, incorporating the former nonprofit mission into a new legal framework.

This time, it was termed "finding a common framework."

What did Dario lose?

Jerry McGinn, who directs George Mason University's Center for Government Contracting, offered a sober assessment: "This is excellent PR for Anthropic, and they don't even need that $200 million."

Financially, this assessment holds up. In 2025, Anthropic generated $14 billion in annual revenue at a $380 billion valuation, and its largest shareholder, Amazon, is unlikely to rethink its investment logic over a government ban. Legal action could even bring a turning point. The IPO valuation is unlikely to suffer significant damage, and the narrative of "resisting government pressure, upholding safety principles" is one no PR budget could buy.

But there is one thing Dario did lose.

The practical standards for AI use in the military domain will be set inside the Pentagon by OpenAI, not Anthropic. The safety stack embedded in classified networks, the OpenAI researchers holding security clearances, the monitoring system tracking AI behavior in real time: over the next few years, these will harden into de facto industry norms.

Anthropic held its ground on principle, but lost its seat at the rule-making table.

And the one who took that seat was the same person who, in less than twelve hours, went from "support" to "sign."

The most ironic part: the company that took AI safety most seriously was kicked out of the place that needed AI safety the most.

The company that took its place had rolled out a mechanism two weeks earlier to feed user conversations into an ad system, had suffered a third-party data breach three months earlier, and had earlier still quietly disclosed a system for scanning conversations that can report users to law enforcement.

In Silicon Valley, Altman's less-than-twelve-hour move has a name. It's not called betrayal; it's called timing.
