
Pentagon picks 8 Big Tech firms for AI — and leaves Anthropic out

DoD signed classified-network AI contracts with SpaceX, OpenAI, Google, MS, Nvidia, AWS, Oracle, Reflection. Anthropic was cut for insisting on safety guardrails — and the White House just reopened the door.

· 6 min read · CNN
Pentagon AI deals diagram — 8 Big Tech logos vs. Anthropic
Source: Department of Defense

8 vs 1

Eight companies on the list. SpaceX, OpenAI, Google, Microsoft, Nvidia, AWS, Oracle, Reflection. One company off it: Anthropic — for insisting on safety guardrails in military AI. Then in the first week of May, the White House quietly reopened talks. In under a year, "the company that was cut" became "the company being recalled."

This isn't a procurement story. It's a fork in how — and on whose terms — AI gets weaponized.

The players — Pentagon and the eight

The U.S. Department of Defense (DoD) is the world's largest single buyer, with an annual budget over $800B. The new contracts apply specifically to AI tools that run inside classified networks — model weights deployed in SCIFs (Sensitive Compartmented Information Facilities) where the most sensitive intel work happens.

These eight companies were already a cloud-and-silicon cartel for DoD. AWS, Microsoft, Google, and Oracle split the JWCC cloud contract. Nvidia is the de facto silicon supplier. SpaceX runs Starshield (military Starlink). OpenAI and Reflection cover frontier LLMs and agents. The whole stack is in — except the safety team.

Defense Secretary Pete Hegseth, in office since January, has pushed an "accelerate AI adoption" line throughout the spring. The eight-way deal is the operational result. The Trump administration also rescinded Biden-era military AI guidelines, signaling that long safety reviews are no longer welcome.

[IMG#1]

Why Anthropic was out

Anthropic had its own government play. Claude Gov is a dedicated line of models for U.S. national-security customers. But Anthropic also baked something into its terms: hard limits on assisting with mass casualty, nuclear, biological, or large-scale cyber operations.

Some Pentagon offices read those clauses as too broad — they argued targeting-assist tasks could trip the guardrail. Hawks inside the administration framed it as "political censorship," and during the joint procurement round, Anthropic was dropped.

Timing matters. The exclusion call happened late last year. By the first week of May, the White House signaled it wanted talks back on. Two things changed in between. One, Claude 4.5 and Sonnet 4.6 closed the gap on coding and agent benchmarks against OpenAI. Two, Anthropic crossed 97M MCP installs, making its tool-call standard the de facto agent backbone — even for OpenAI and Google models.

The contract structure

| Item | The 8 | Anthropic (cut → recalled) |
| --- | --- | --- |
| Scope | Classified-network AI tools | Same category, separate negotiation |
| Guardrails | Per-vendor TOS, DoD bilateral carve-outs | Hard limits in TOS, no military exemption |
| Cloud backbone | The four JWCC clouds (AWS, Microsoft, Google, Oracle) | Bedrock (AWS), Vertex (Google) |
| Models | GPT-5.4, Gemini 3.1, Llama gov variants, Reflection-1 | Claude Gov line |
| Estimated value | Multi-billion, multi-year, multi-agency | Undisclosed; possible side deal |

The table makes the point: Anthropic wasn't excluded for capability. It was excluded for one clause in its terms of service. None of the other eight have a comparable hard line.

Who wins what

OpenAI wins the biggest single government customer in the world. Sam Altman personally testified at DoD director-level panels last year arguing that "frontier AI is a U.S. national security asset." This deal is the payoff.

SpaceX and Nvidia get to lock the pipes. With Starshield owning battlefield comms and Nvidia's H200/B200 variants becoming the GPU baseline, Elon Musk and Jensen Huang become the default infrastructure under any future defense AI program.

Reflection is the surprise. A new LLM startup that emerged last fall took the eighth slot — the slot Anthropic would have held. The signal: there is now a "non-Anthropic frontier model" supplier the Pentagon endorses.

[IMG#2]

Lessons from past clashes

This isn't the first Pentagon-Silicon Valley collision. In 2018, Project Maven saw 4,000 Google employees petition to drop a military computer-vision contract; Google declined to renew. Palantir and Anduril stepped in.

In 2023, Microsoft faced internal protest over its HoloLens military contract — but kept the deal. In 2024, an Anthropic-Palantir-AWS partnership stood up Claude in IL6 environments, the precursor to Claude Gov.

Three lessons. (1) Strong companies absorb internal dissent — which is why Google in 2018 and Microsoft in 2023 ended up in different places. (2) A guardrail clause is one line in legal text but billions in political cost. Anthropic protected the line and lost six months of revenue. (3) When the model-quality gap closes, the political pendulum swings back to the holdout — which is exactly what is happening now.

Counter-moves

Anthropic's counter has two prongs. First, lean harder into Claude Gov for the IC channel — NSA and CIA IL6 environments are separate from this DoD contract. Second, entrench MCP as the universal agent-tool standard, so even OpenAI- or Google-served deployments still run on Anthropic's protocol.
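MCP's leverage comes from its wire format: every MCP client speaks the same JSON-RPC 2.0 messages no matter which vendor's model sits behind it. A minimal sketch of building a `tools/call` request — the tool name and arguments here are illustrative, not from any real deployment:

```python
import json

def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build an MCP tools/call request in its JSON-RPC 2.0 wire format."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# The request shape is identical whether the model behind the client is
# Claude, GPT, or Gemini -- the protocol, not the model, is the fixed point.
msg = mcp_tool_call(1, "search_docs", {"query": "IL6 deployment guide"})
```

That model-agnosticism is the strategic point of the second prong: a deployment served by an OpenAI or Google model still routes its tool traffic through Anthropic's protocol.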

OpenAI's counter is to lock the floor: it has floated a proposal to build dedicated AI supercomputing inside government facilities, importing the Stargate footprint into classified sites.

Google and Microsoft are positioning for the next JWCC RFP rounds, where cloud share — not model wins — represents the longer-lived asset.

Stakes

  • Wins: OpenAI — largest government anchor customer; classified-network access.
  • Wins: SpaceX, Nvidia — comms and silicon become defense default.
  • Loses: Anthropic — six months of revenue, but brand reinforced as "safety-first."
  • Loses: Safety advocates — diminished policy leverage as guardrails are rolled back.
  • Watching: White House — the terms of the Anthropic recall set the template for every future defense AI contract.

Skeptical view

Helen Toner (Georgetown CSET): "An eight-vendor deal without a safety partner trades short-term speed for long-term political fragility — one incident could freeze the program in Congress."

A second critique from Gary Marcus (NYU emeritus): "Hallucination remains common in production LLMs. The standard reliability assumptions break down when a hallucination chains directly into a kinetic decision."

What changes for you

For builders — defense-adjacent SaaS now has eight, not six, model providers to certify against. Build on JWCC clouds plus Oracle to be eligible for IL5/IL6 RFPs.

For founders — Reflection's slot proves a new LLM company can leapfrog into the top tier on political trust alone. Track allied-RFP categories where U.S. partners co-author the requirements.

For investors — defense AI is now a clearer bucket. Palantir (PLTR) and Rocket Lab (RKLB) benefit directly; Nvidia (NVDA) and privately held players like Anduril and SpaceX benefit through the supply chain.

For end users — the second-order effect is your TOS. If consumer ChatGPT or Gemini begins importing "national security carve-outs" from defense contracts, expect quiet TOS edits on civilian products too.

3-Line Summary

  • DoD signed classified-network AI deals with eight Big Tech firms; Anthropic was excluded.
  • Reason was Anthropic's safety guardrail clauses; rivals closed the model gap, prompting White House recall talks.
  • The eight take near-term revenue; Anthropic compounds leverage via MCP and the IC channel.
