Amazon Just Poured Another $25B Into Anthropic — And Wired 5GW of Trainium Into Their Backbone
On April 20, Amazon committed up to $25B more to Anthropic alongside 5GW of dedicated Trainium compute. Anthropic pledged $100B+ in AWS spend over a decade, with 1GW coming online on Trainium2/3 by the end of 2026.

$33B
That is the running total of what Amazon has poured into Anthropic. It started with a $4B commitment in September 2023, doubled to $8B with a second $4B round in November 2024, and now up to $25B more has been locked in. Over the next decade, Anthropic has committed to spending more than $100B on AWS. In return, it gets 5GW — compute equivalent to five full-size nuclear reactors' worth of dedicated AI capacity.
This is not a cloud contract anymore. Anthropic has essentially been welded to Amazon's infrastructure spine, and Amazon has secured the single largest captive customer for its in-house Trainium chips.
The context you need
Anthropic's growth curve is the steepest in AI right now. Its annualized revenue run rate went from roughly $9B at the end of 2025 to north of $30B in about six months. More than 1,000 enterprise customers are spending $1M+ per year on Claude — double the 500 it had in February.
That velocity finally broke Anthropic's own backend. The company acknowledged in its own announcement that the surge in demand had strained reliability and performance. Translation: traffic is growing faster than capacity can be added, and the fleet couldn't keep up.
So AWS stepped in with the fix: not a capacity bump but a dedicated slice of its network, built around its own silicon. The timing is no accident — Amazon has spent three years telling Wall Street that Trainium is real. A $25B Anthropic anchor settles the argument.
Breaking down the core
The 5GW shock
Five gigawatts is not a number thrown around casually in data center reports. For reference, OpenAI's Stargate site in Abilene, Texas, considered the largest single AI compute buildout ever planned, is aiming for roughly 1.2GW in its first phase. Meta's Hyperion is projected to reach 2GW at peak. Anthropic alone is now contracted for 5GW.
Here is the delivery schedule:
| Milestone | Capacity | Chip family | Timing |
|---|---|---|---|
| Phase 1 online | 1GW | Trainium2, Trainium3 | By end of 2026 |
| Ramp to full commitment | +4GW | Trainium3 and later | 2027–2029 |
| Contract duration | Full 5GW | Spans multiple Trainium generations | 2026–2036 (up to 10 years) |
A 1GW cluster can train a frontier model roughly the size of GPT-4.5 end-to-end in single-digit weeks. With 5GW, Anthropic has enough headroom to either train multiple frontier models in parallel or serve Claude inference at a scale no competitor currently matches.
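To see why single-digit weeks is plausible, here is a rough back-of-envelope sketch in Python. Every constant below (chip power draw, per-chip throughput, utilization, total training FLOPs) is an illustrative assumption, not a figure from the announcement.

```python
# Back-of-envelope: training time on a ~1GW accelerator cluster.
# All constants are illustrative assumptions, not announced figures.

CLUSTER_POWER_W = 1e9       # 1 GW of IT power dedicated to accelerators (assumed)
CHIP_POWER_W = 1_000        # ~1 kW per chip incl. cooling/network overhead (assumed)
CHIP_FLOPS = 6.5e14         # ~650 TFLOPS usable per chip (assumed)
UTILIZATION = 0.40          # sustained model-FLOPs utilization (assumed)
TRAINING_FLOPS = 5e26       # total compute for a frontier-scale run (assumed)

chips = CLUSTER_POWER_W / CHIP_POWER_W              # ~1,000,000 chips
sustained = chips * CHIP_FLOPS * UTILIZATION        # cluster-wide FLOP/s
weeks = TRAINING_FLOPS / sustained / (86_400 * 7)
print(f"{chips:,.0f} chips -> roughly {weeks:.1f} weeks per run")  # ~3 weeks
```

Shift any of those assumptions by 2x and the answer still lands in weeks rather than months, which is the substance of the claim.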
Trainium's economics of scale
Anthropic's earlier deals were already the largest ever announced for Trainium. Project Rainier (announced in 2024) committed 400,000 Trainium2 chips. The new arrangement adds at least another million-chip-class deployment on top of that, which is roughly where the 5GW figure comes from.
Why does this matter for Amazon? Trainium was designed to undercut Nvidia H100/H200/B200 pricing by 40% at equivalent throughput. But to hit that margin, AWS needs volume. A dedicated buyer of this size is the definition of volume — it amortizes the silicon R&D, justifies continuing Trainium3 and Trainium4 roadmaps, and gives AWS internal training data on real frontier workloads that no cloud competitor has.
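A toy model of that volume argument, as a sketch: the R&D budget, per-unit cost, and Nvidia reference price below are hypothetical placeholders; only the 40% undercut target is taken from the paragraph above.

```python
# Illustrative only: how volume amortizes fixed silicon R&D into per-chip cost.
# R&D budget, unit cost, and the Nvidia reference price are hypothetical.

RND_FIXED = 1.5e9            # assumed R&D + tape-out cost for one Trainium generation ($)
UNIT_COST = 6_000            # assumed marginal cost to build and deploy one chip ($)
NVIDIA_EQUIV_PRICE = 30_000  # assumed price of a throughput-equivalent Nvidia part ($)
TARGET_DISCOUNT = 0.40       # the 40% undercut the article cites

target_price = NVIDIA_EQUIV_PRICE * (1 - TARGET_DISCOUNT)  # $18,000

for volume in (50_000, 200_000, 1_000_000):
    all_in = UNIT_COST + RND_FIXED / volume
    margin = (target_price - all_in) / target_price
    print(f"{volume:>9,} chips: all-in ${all_in:,.0f} vs ${target_price:,.0f} target -> margin {margin:.0%}")
```

At tens of thousands of chips the fixed R&D swamps the discounted price; at a million-chip deployment it becomes a rounding error. That is why a captive buyer of Anthropic's size matters.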
The Mythos setup
Buried in the same announcement is a piece the press mostly skipped: Anthropic will leverage Amazon's Project Mythos data center network. Mythos is AWS's next-generation, liquid-cooled, high-power-density campus architecture — rumored to support 1GW+ per site. Anthropic is effectively the anchor tenant for Mythos.
That shifts the power relationship. AWS is no longer just selling capacity to Anthropic — it's co-designing buildings around Anthropic's training profile.
The bigger picture
To see how much leverage this actually gives, look at where the major frontier labs sit on compute footprint right now.
| Lab | Primary cloud | Announced compute commitment | Chip mix |
|---|---|---|---|
| Anthropic | AWS (primary), GCP (secondary) | 5GW dedicated + $100B spend | Trainium2/3 + some TPU, H200 |
| OpenAI | Microsoft, Oracle, SoftBank | 10GW across Stargate and MSFT | Nvidia-heavy (H200, B200, GB300) |
| Google DeepMind | Google Cloud (internal) | Not disclosed, >10GW est. | TPU v6, v7 (internal) |
| xAI | Self-owned (Colossus 1, 2) | 1.5GW live, 6GW planned | Nvidia H200, B200 |
| Meta AI | Self-owned + some AWS | 2GW+ by 2026 (Hyperion) | Nvidia B200 + MTIA |
Anthropic's position is unique. It is the only frontier lab of its size that is almost entirely dependent on a single cloud provider's custom silicon. OpenAI is spread across multiple Nvidia-based partners; Google and Meta run their own fleets; xAI owns its own buildings. Anthropic is going all-in on AWS Trainium.
Anthropic just became a derivative of AWS operational excellence. If Trainium delivers, Claude scales. If Trainium slips, Claude slips. The upside is massive — but so is the concentration risk.
And Amazon is now Anthropic's largest outside shareholder by a wide margin. With $33B in cumulative investment, AWS holds more economic exposure to Claude than any other entity except Anthropic's employees and founders themselves.
So what actually changes
For Anthropic users, three things follow from this:
The first is reliability. The capacity strain that triggered the announcement — Claude slowing down under peak load, rate limits tightening on Pro and Team plans — gets a durable fix. The 1GW going live by the end of 2026 means Anthropic can serve inference at roughly 2–3x current peak capacity before the next growth spike.
The second is model scale. With 5GW of headroom, Anthropic is no longer compute-bound for its next frontier training run. Claude Opus 5 — rumored for late 2026 — is the first frontier model that will be trained end-to-end on Trainium silicon. If it lands well, it validates AWS's chip roadmap and reshapes how Nvidia prices H200/B200 to the other labs.
The third is enterprise pricing. Anthropic's cost-per-token on Trainium, if the 40% Nvidia delta holds, should drop meaningfully through 2027. That gives Anthropic room to either cut API prices and pressure OpenAI's margins, or keep pricing flat and widen its own gross margin ahead of a potential IPO.
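To make the choice between cutting prices and widening margin concrete, here is a small sketch. The baseline serving cost and list price are hypothetical placeholders; only the 40% delta comes from the article.

```python
# Sketch: how a hardware cost delta turns into pricing room.
# Baseline cost and list price are placeholders; only the 40% delta is cited.

BASELINE_COST_PER_MTOK = 2.00  # assumed all-in serving cost per 1M tokens on Nvidia ($)
TRAINIUM_DELTA = 0.40          # the cost advantage cited above
API_PRICE_PER_MTOK = 15.00     # assumed list price per 1M output tokens ($)

nvidia_margin = 1 - BASELINE_COST_PER_MTOK / API_PRICE_PER_MTOK
trainium_cost = BASELINE_COST_PER_MTOK * (1 - TRAINIUM_DELTA)
trainium_margin = 1 - trainium_cost / API_PRICE_PER_MTOK

# Option A: hold price, widen gross margin
print(f"gross margin: {nvidia_margin:.0%} -> {trainium_margin:.0%}")

# Option B: cut price until margin returns to the old level, pressuring rivals
matched_price = trainium_cost / (1 - nvidia_margin)
print(f"same-margin price: ${API_PRICE_PER_MTOK:.2f} -> ${matched_price:.2f} per 1M tokens")
```

Under these placeholder numbers, the same serving margin supports a list price roughly 40% lower, which is exactly the lever on OpenAI's pricing the paragraph above describes.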
For the rest of the industry, the Amazon-Anthropic deal sets a new template: dedicated silicon, dedicated buildings, dedicated power, bound by a 10-year take-or-pay contract. Expect Microsoft to respond with something similar for OpenAI on its Maia chip, and Google to tighten DeepMind's captive TPU arrangement. Anyone trying to compete at frontier scale without one of these dedicated cloud relationships — think Mistral, Cohere, Perplexity — is going to feel the squeeze first.
For a similar structural shift that happened last week in a different vertical, see how Revolut and Pragma split the stack in banking foundation models.
Further reading
Related articles

42.5 ExaFLOPS: Google's Ironwood TPU Rewrites the Inference Playbook
Google's 7th-gen TPU Ironwood hits GA with 4,614 TFLOPS per chip, 9,216-chip superpods, and Anthropic signing up for 1M TPUs. The inference era has a new king.

Anthropic Hits $30B Revenue, Signs Largest-Ever Compute Deal
Anthropic's annual revenue run rate surpassed $30B, overtaking OpenAI, while securing a 3.5GW TPU supply deal with Broadcom and Google through 2031.

The $35 Billion GPU Landlord — How CoreWeave Became AI's Infrastructure Backbone
CoreWeave landed a $21B expansion with Meta and an Anthropic deal back-to-back, bringing total contracted AI cloud revenue to $35B through 2032.