Anthropic Hits $30B Revenue, Signs Largest-Ever Compute Deal
Anthropic's annual revenue run rate surpassed $30B, overtaking OpenAI, while securing a 3.5GW TPU supply deal with Broadcom and Google through 2031.

$30 Billion. That's Anthropic's New Number.
For years, the AI world has been locked in a revenue race. OpenAI crossed the $1 billion annual revenue mark, then $10 billion, then $25 billion. Everyone watched to see who would get there first, bigger, faster. Last month, OpenAI's ARR sat at roughly $25 billion. This week, Anthropic announced its annual revenue run rate has hit $30 billion.
Let that sink in. A company founded in 2021 has surpassed one of the industry's most-watched competitors in pure revenue velocity. But this isn't just about bragging rights. The numbers behind that $30 billion tell a story about how the economics of AI are being reshaped, who's winning enterprise trust, and what kind of infrastructure bets matter now.
On top of that announcement came something equally significant: Anthropic inked its largest-ever compute deal. Working with Google and Broadcom, the company secured a 3.5-gigawatt TPU capacity agreement running through 2031. That's more than three times the company's current 1-gigawatt capacity. It's a signal about where Anthropic thinks the market is heading—and how confident it is in getting there.
Here's what you need to know about both moves, why they matter, and what happens next.
The Road to $30B
Hitting $30 billion ARR is remarkable for a company less than five years old. But Anthropic didn't get here on hype alone. The growth reflects real traction with the customers who actually spend serious money on AI.
Start with the enterprise segment. Eighty percent of Anthropic's revenue comes from enterprise customers. And here's the jaw-dropping part: Anthropic now serves over 1,000 enterprise customers paying at least $1 million per year. That number doubled since February. Doubled, in two months.
Compare that to the image of AI companies serving millions of individual users on $20-a-month subscriptions. That model has value, sure, but the revenue math is brutal. Enterprise deals—complex, sticky, mission-critical—are where the real money lives. Anthropic recognized this early and built its entire go-to-market strategy around it.
The company's flagship model, Claude 3.5 Sonnet, became the standard for many enterprises wrestling with security, accuracy, and controllability. Unlike some competitors who chased raw capability, Anthropic emphasized constitutional AI, interpretability, and responsible scaling. It turns out enterprises care deeply about that—especially when they're writing checks for seven or eight figures.
The growth trajectory speaks for itself. In February, Anthropic had 500 enterprise customers at $1 million-plus ARR. By April, that number crossed 1,000. The company went from $9 billion ARR at the end of 2025 to $30 billion in just over three months. That's not gradual scaling. That's explosive, hockey-stick growth.
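As a quick sanity check on that trajectory, here is the implied compound growth rate behind those figures (illustrative arithmetic only, assuming roughly three months between the $9 billion and $30 billion marks):

```python
# Illustrative sanity check of the reported ARR trajectory.
# Assumes ~3 months between the $9B (end of 2025) and $30B figures.
start_arr = 9.0   # $B, end of 2025
end_arr = 30.0    # $B, current

months = 3
multiple = end_arr / start_arr                # total growth multiple
monthly_rate = multiple ** (1 / months) - 1   # implied compound monthly growth

print(f"{multiple:.2f}x overall")                  # ~3.33x
print(f"{monthly_rate:.1%} per month, compounded") # ~49.4% per month
```

A compounded growth rate near 50% per month is what "hockey-stick" looks like in numbers; even sustaining a fraction of that pace would be extraordinary.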
What's driving it? A few things. First, Claude's quality and reliability have become table stakes for enterprises building AI into mission-critical systems. Second, Anthropic's safety-first philosophy resonates with risk-averse organizations in healthcare, finance, and law. Third, the company has built genuine relationships with enterprise buyers instead of chasing consumer virality.
"Enterprise customers aren't buying flashy. They're buying reliability, security, and solutions to real problems. Anthropic figured that out before most competitors."
Now, here's where it gets interesting. With $30 billion ARR, Anthropic needs compute like never before. You can't serve 1,000+ enterprise customers, millions of API calls, and cutting-edge model training with yesterday's infrastructure. That's where the Broadcom deal comes in.
What 3.5 Gigawatts Actually Means
Let's break down the numbers first, then explain what they actually mean for Anthropic's future.
As of now, Anthropic operates roughly 1 gigawatt of TPU capacity. The new deal with Broadcom and Google will increase that to 3.5 gigawatts—a 3.5x expansion. The agreement runs through 2031, which is crucial. It's not just a one-off purchase. It's a committed, multi-year supply chain locked in.
According to research from Mizuho Securities, Broadcom will pull $21 billion in AI-related revenue from Anthropic in 2026, scaling to $42 billion in 2027. These aren't Broadcom's total revenues, but specifically the revenue tied to servicing Anthropic's chip orders. For context, that's massive. It signals how serious this investment is, and how confident Broadcom—and by extension, Google—are in Anthropic's continued growth.
The stock market noticed. Broadcom's shares rose 3% on the announcement. Investors see this as validation that Google and Broadcom are betting aggressively on Anthropic as a key customer and on the TPU infrastructure play as a winner.
| Metric | Current | New Deal | Notes |
|---|---|---|---|
| TPU capacity (GW) | 1.0 | 3.5 | 3.5x expansion |
| Contract duration | – | 2027–2031 | 5 years |
| Broadcom AI revenue from Anthropic, 2026 (est.) | – | $21B | Mizuho Securities estimate |
| Broadcom AI revenue from Anthropic, 2027 (est.) | – | $42B | Mizuho Securities estimate |
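The headline figures above hang together; a short sketch of the arithmetic, using the values from the announcement and the Mizuho estimate:

```python
# Quick arithmetic on the deal figures cited above.
current_gw = 1.0   # Anthropic's current TPU capacity, GW
new_gw = 3.5       # capacity under the new agreement, GW
expansion = new_gw / current_gw  # capacity multiple

rev_2026 = 21.0    # $B, Mizuho estimate of Broadcom's Anthropic-related revenue
rev_2027 = 42.0    # $B, 2027 estimate
ramp = rev_2027 / rev_2026       # year-over-year ramp

print(f"Capacity expansion: {expansion}x")  # 3.5x
print(f"Revenue ramp 2026 to 2027: {ramp}x")  # 2.0x
```

In other words, Broadcom's Anthropic-related revenue is projected to double year over year while Anthropic's TPU footprint more than triples.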
Why TPUs, Not GPUs
You might wonder: why TPUs instead of GPUs? Doesn't everyone use GPUs for AI?
The short answer: not anymore, at least not exclusively. TPUs—Tensor Processing Units—are Google's custom silicon designed specifically for machine learning workloads. They're deeply optimized for the kinds of matrix math that power large language models. For large-scale training and inference, TPUs can be more efficient than GPUs in many scenarios.
Anthropic has been a Google Cloud customer for years. Using TPUs makes sense on multiple levels. First, it locks in a long-term partnership with Google, which owns both the chip design and massive data center capacity. Second, TPUs are increasingly proven at scale for LLM workloads. Third, it signals commitment to the Google ecosystem, which matters for enterprise customers evaluating multicloud strategies.
The GPU market is dominated by NVIDIA, which has been the safe default for years. But by going all-in on TPUs, Anthropic is making a different bet. It's saying: we believe in Google's silicon, we believe in the efficiency gains, and we're confident enough to commit for five years.
An American AI Infrastructure Play
There's also a geopolitical angle worth noting. Broadcom is American. Google is American. TPUs are designed in the United States. This deal, announced amid heightened scrutiny over AI compute access and export controls, signals something important: major AI leaders are investing in American-controlled infrastructure.
China, the EU, and other regions are all building their own AI capabilities. The compute wars are heating up. By locking in a massive TPU supply deal with Broadcom and Google, Anthropic is making a statement about where its infrastructure will be anchored. It's an American company betting on American chips, American partners, and American supply chains.
For government and enterprise customers worried about data sovereignty and supply chain risks, that matters.
The Bigger Picture
What do these two announcements—$30B ARR and a 3.5GW TPU deal—tell us about the state of AI in 2026?
First, the race for scale is intensifying. $30 billion ARR for a company under five years old is remarkable. It reflects explosive demand for AI products, but it also means the winner-take-most dynamics are real. Companies that fall behind in revenue growth, enterprise adoption, or compute capacity will struggle to compete.
Second, infrastructure is becoming the moat. It's not enough to have a great model anymore. You need reliable, abundant, cheap compute to train, fine-tune, and serve that model at scale. Anthropic just locked in 3.5GW through 2031. That's a competitive advantage that's hard to replicate. Competitors would need to negotiate similar deals, but supply is constrained. Google and Broadcom won't sign equally generous deals with every AI startup.
Third, enterprise wins are sticky. Eighty percent of Anthropic's revenue comes from enterprises. These are long-term contracts, often with switching costs. An enterprise that's integrated Claude into its legal review system, customer service automation, or content generation pipeline won't rip it out lightly. This customer concentration also makes revenue more predictable and defensible.
Fourth, geopolitical considerations are shaping tech strategy now. Choosing American chips, American partners, and American infrastructure isn't just a business decision. It's a signal to customers, regulators, and governments about where you stand.
| Comparison | Anthropic | OpenAI |
|---|---|---|
| ARR, end of 2025 | $9B | ~$15B |
| Current ARR | $30B | ~$25B |
| Enterprise customers ($1M+/year) | 1,000+ | Undisclosed |
| Enterprise revenue share | 80% | Undisclosed |
| Compute deal | 3.5GW TPU via Broadcom–Google | Undisclosed |
What This Changes for You
If you're an enterprise building AI products, Anthropic's momentum matters. It signals stability, continued investment in model improvement, and deep pockets to fund research and infrastructure. Companies often choose vendors they believe will still exist and improve in five years. Anthropic just raised that confidence bar.
If you're an investor or analyst, this deal is a watershed moment. Anthropic is no longer an upstart betting against incumbents. It's a market leader trading blows with OpenAI at the top. The $30B ARR claim needs scrutiny—is that sustainable, or will growth plateau?—but the trajectory is undeniable.
If you're a Broadcom investor, this deal is validation that enterprise AI spending is real and massive. Broadcom and Google aren't making $42 billion bets on vapor. They're responding to actual, demonstrated demand.
If you're competing against Anthropic, you're looking at a company that's operationally firing on all cylinders. It has enterprise traction, locked-in compute, growing revenue, and confidence to commit years ahead. The window to outmaneuver them on speed or scale is narrowing.
The compute deal also signals something subtle but important: Anthropic is thinking long-term. Five years is an eternity in AI. The company is essentially saying: "We're going to be here in 2031, and we're going to need a lot of TPUs to serve our customers."
That confidence—or maybe even audacity—is a signal worth heeding.