
Pentagon Clears 8 AI Firms for Classified IL6/IL7 Networks; Anthropic Notably Excluded

On May 1, the Pentagon announced agreements letting NVIDIA, Microsoft, AWS, Google, OpenAI, SpaceX, Oracle, and Reflection deploy AI in classified Impact Level 6/7 environments. Anthropic, which insisted on weapons/surveillance guardrails, was excluded as DOD pushed for unrestricted-purpose language.

Pentagon IL6/IL7 classified network AI agreements with eight firms — Anthropic excluded
Source: TechCrunch

Eight Firms In, Anthropic Out — The Pentagon's New AI Procurement Map

Here's the deal: on May 1, the Pentagon announced eight AI firms cleared to deploy on IL6 (Secret) and IL7 (Top Secret) classified networks: NVIDIA, Microsoft, AWS, Google, OpenAI, SpaceX, Oracle, and Reflection. The systems will be used for analysis, logistics, and large-scale data processing, under contract language the lawyers settled on as "unrestricted-purpose AI." The headline isn't who's in; it's who's out. Anthropic stuck to its weapons/surveillance guardrails and walked from the deal. The slot Anthropic vacated went to Reflection AI, a $2B-funded, NVIDIA-backed startup founded by Google DeepMind alumni. The "AI safety vs. federal procurement" trade-off just had its first real-world test case.

The Players — Pentagon, Eight Firms, Excluded Anthropic, Reflection AI

The Pentagon side runs through DISA (the Defense Information Systems Agency) under the DOD CIO, in coordination with the Joint Chiefs and Air Force Cyber Command. IL6 handles SECRET-level data; IL7 handles TS/SCI. Both are air-gapped or strictly compartmentalized environments with no internet connectivity.

The eight firms split roles roughly as follows:

| Firm | Primary Role | Notes |
| --- | --- | --- |
| NVIDIA | GPU infrastructure + CUDA stack | Backed Reflection |
| Microsoft | Azure Government Secret + OpenAI hosting | OpenAI backchannel |
| AWS | Secret Region + GovCloud Top Secret | Largest infrastructure vendor |
| Google | GCP for Federal Top Secret | DeepMind model hosting |
| OpenAI | GPT-5/5.4 + Codex | Hosted via Microsoft Azure |
| SpaceX | Starlink Secret + Colossus 1 compute | Musk-Pentagon link |
| Oracle | Oracle Cloud Defense Region | Added 5/3 |
| Reflection | Autonomous reasoning + agents | Newcomer |

Anthropic walked from the same negotiating table over guardrails. Anthropic's Acceptable Use Policy (AUP) restricts use of Claude for autonomous lethal weapons, mass surveillance/targeting, and CBRN weapons design. Pentagon insisted on "unrestricted-purpose AI" language, and Anthropic refused. The result: Anthropic stays out of IL6/IL7 and pursues less weapons-adjacent procurement (DOE, HHS, USAID).

Reflection AI launched in 2024 — eight Google DeepMind alumni, $2B Series A from NVIDIA and Sequoia. Focus: autonomous reasoning and agents. The IL6/IL7 inclusion at 18 months from founding is record-fast — typical federal procurement entry takes 5-7 years. NVIDIA's political and capital weight made it possible.

Per TechCrunch, the Pentagon signed deals with eight firms (NVIDIA, Microsoft, AWS, Google, OpenAI, SpaceX, Oracle, Reflection) for IL6/IL7 deployment, while Anthropic — insisting on weapons/surveillance guardrails — was left out.

"Unrestricted-Purpose AI" — The Fault Line

The contractual flashpoint is "unrestricted-purpose AI" language. Companies must agree that their normal AUP application restrictions (e.g., no targeting, no surveillance, no CBRN design) will not apply within Pentagon environments. Anthropic refused this premise.

Anthropic's stance: Constitutional AI principles must hold in federal procurement just like everywhere else. Specifically, Anthropic's AUP bars use for (1) autonomous lethal weapons, (2) mass surveillance/targeting, and (3) CBRN weapons design assistance. Pentagon's "AUP void inside IL6/IL7" demand was the deal-breaker.

OpenAI, Google, Microsoft, and others have similar AUP language but accepted "separate negotiation in federal environment" terms — keeping AUPs intact for consumer/enterprise but agreeing to "unrestricted" inside IL6/IL7. This creates a structural "two-tier" AI safety policy that critics could attack.

Actual stated applications: (1) analysis (SIGINT/HUMINT data), (2) logistics (military supply chains), (3) large-scale data processing (reconnaissance imagery, satellite, document classification). Direct lethal targeting and autonomous weapons control are not explicit applications, but the "unrestricted" language allows future expansion.

Who Wins — Pentagon, Eight Firms, Anthropic, AI Safety Camp

Pentagon wins twice. AI infrastructure diversification: eight competing vendors give DOD pricing, performance, and safety leverage. "Unrestricted-purpose" language: legal foundation for military applications without AUP friction.

The eight firms get a federal procurement revenue ramp. IL6/IL7 AI revenue could total $20-40B over five years, with per-vendor quarterly revenue of $0.5-1.5B at 60-70% margins — high operating-income contribution. Federal procurement entry also strengthens "safe vendor" branding, with spillover into commercial enterprise.

Anthropic loses revenue but gains brand. Federal procurement opportunity foregone. But "safety-first" brand strengthens — Anthropic emerges as the only major U.S. lab that says no when guardrails would have to be dropped. That positions Anthropic for premium customer loyalty in regulated industries (finance, healthcare, legal) and for preferred status in allied governments (E.U., Japan, Korea) building AI safety governance.

The AI safety community gets mixed signals. Negative: eight of the nine firms at the negotiating table accepted "unrestricted-purpose AI" language, weakening the safety norm. Positive: Anthropic walking is now a real precedent; "safe AI can refuse federal procurement" is no longer hypothetical.

Past Parallels — Wins and Losses

AWS Secret Region launch (2017): AWS opened IL6 GovCloud Secret Region; federal cloud revenue ramped from $0.5B/quarter to $3B in five years. AI applications could ramp faster — Pentagon AI infrastructure and adoption are running in parallel.

Microsoft JEDI → JWCC (2019-2024): Microsoft won the JEDI contract in 2019, saw it canceled in 2021 amid litigation, then re-entered via the multi-vendor JWCC contract. "Multi-vendor + price competition" became the Pentagon AI procurement default, and this 8-firm structure follows directly.

Google Project Maven boycott (2018): Google participated in military video analysis, then withdrew under employee protest. First public "AI company federal procurement vs. employee/social pushback" trade-off. Anthropic's stance here absorbs the Maven lesson.

Palantir federal procurement controversy (2017-2024): Palantir ramped via ICE/CIA contracts but suffered brand damage from immigration tracking and targeting applications. The "unrestricted-purpose" language could trigger similar controversies for the eight signatories down the line.

Counter-Plays — Anthropic, Allies, AI Safety Camp

Anthropic counters two ways. Other federal channels — DOE, HHS, USAID, NIH — non-weapons procurement that could ramp to $5-10B over five years. "Safe AI for regulated industries" branding — strengthening default positioning in finance, healthcare, and legal.

Allied governments (U.K., E.U., Japan, Australia, Korea) could differentiate by preferring Anthropic in their own procurement. With the U.S. Pentagon accepting "unrestricted-purpose AI," allies have an opening to set tighter governance norms — U.K. AISI, E.U. AI Act, Japan's AI guidance moving toward "respect AUPs in federal procurement."

The AI safety policy camp could push Congress on this. Sen. Markey, Rep. Lieu, and similar voices may introduce legislation requiring federal procurement to honor AI company AUPs. If passed, the IL6/IL7 application scope tightens.

China and Russia read "unrestricted-purpose AI" as both a threat and a justification. Unrestrained U.S. military deployment of frontier AI raises military-application risk and gives Chinese and Russian programs cover to ramp similarly. Five-year horizon: the AI arms race takes sharper definition.

What Changes — Devs, Founders, Investors, End Users

Devs: AUP language now matters for real. Anthropic showed AUPs can drive actual refusals. AI companies will draft AUPs more carefully, and employees gain leverage on what their company's AUP says.

Founders: federal procurement is now a viable startup market. Reflection's 18-month entry signals NVIDIA-backed startups can leverage "NVIDIA political access + federal procurement" packaging for fast revenue ramps.

Investors: NVIDIA-backed companies get a valuation premium. Reflection's $2B Series A reflects "NVIDIA can short-cut federal procurement" pricing power. Anthropic's valuation continuing to ramp despite forgone revenue strengthens "safety = premium" thesis.

End users: limited direct impact, but social/civil society debate around "AI companies in military applications" intensifies. AI ethics and governance activism likely accelerates over 12-24 months.

Stakes

  • Wins: Pentagon CIO/DISA — "unrestricted-purpose AI" + 8-firm vendor leverage; Reflection AI — record-fast federal procurement entry; NVIDIA — Reflection backing + GPU infrastructure default across the eight; SpaceX (Musk) — Pentagon + Colossus compute + Starlink Secret triple stack.
  • Loses: Anthropic — IL6/IL7 revenue forgone but safety-first brand strengthens; Microsoft Azure-OpenAI single-stack — share dilutes among eight vendors; AI safety policy community (FLI, MIRI) — "unrestricted-purpose AI" acceptance dents narrative.
  • Watching: Allied government procurement — Anthropic preferential signaling; U.S. Congress — legislation requiring AUP respect in federal procurement; China/Russia AI labs — military application ramps.

The Skeptics — "Actual Applications Are Analysis/Logistics — Headlines Overread"

Paul Scharre (CNAS) argues actual applications are analytical, logistical, and data processing — not autonomous weapons targeting. "Unrestricted-purpose" doesn't mean instant lethal targeting. DOD's own AI ethics principles (human-in-the-loop) keep applications bounded for now.

Heather Roff (Brookings) flags Pentagon internal governance as the actual variable. With or without company AUPs, DOD's own policies (e.g., bans on nuclear/bio/chem weapons) define application scope. Whether Anthropic's refusal "strengthened safety" or "just lost revenue" depends on how the eight firms' deployments actually play out over 12-24 months.

Two skeptic lines: (1) "unrestricted-purpose" application impact is overread, (2) Pentagon internal governance is the real arbiter. Both balance against a simplistic "Anthropic excluded = AI safety strengthened" reading.

TL;DR

  • Pentagon signed deals with 8 firms (NVIDIA, Microsoft, AWS, Google, OpenAI, SpaceX, Oracle, Reflection) for IL6/IL7 classified network AI on May 1.
  • Anthropic walked over weapons/surveillance guardrails — "unrestricted-purpose AI" was the deal-breaker.
  • Reflection AI's 18-month-from-founding entry is the fastest federal procurement entry on record — NVIDIA backing was decisive.
