Anthropic hits $1T on secondary markets, leapfrogs OpenAI for the first time
On Forge Global and other secondary markets, Anthropic's valuation hit $1T, eclipsing OpenAI's ~$880B for the first time. The drive is revenue: ARR reportedly jumped from $9B at en
TL;DR
- On Forge Global and other secondary markets, Anthropic's valuation hit $1T, eclipsing OpenAI's ~$880B for the first time. The drive is revenue: ARR reportedly jumped from $9B at end-2025 to $30B by Ma
- Primary source: https://www.tomshardware.com/tech-industry/artificial-intelligence/anthropic-surpasses-biggest-rival-openai-in-secondary-market-valuation-surges-to-usd1-trillion-amid-frantic-investor-interest
- Importance score: 9/10
The hook
Here's the deal: On Forge Global and other secondary markets, Anthropic's valuation hit $1T, eclipsing OpenAI's ~$880B for the first time. The drive is revenue: ARR reportedly jumped from $9B at end-2025 to $30B by March, with Claude Code alone clocking $2.5B (some figures unverified). Anthropic's actual primary valuation is $380B from its February round.
Importance lands at 9/10, which puts this in the top decile of releases this quarter — the kind of announcement that still shapes product roadmaps and industry metrics six months out, not a marketing pulse that fades in a week.
Below: what happened (anchored to the primary source), the headline numbers in two tables, the timeline, what this means for individuals / teams / industry, a deep-dive section on the technical and architectural implications, skeptical takes worth keeping in mind, and what to watch in the next week.
What happened
On private secondary markets like Forge Global — venues where insiders and early investors trade existing equity — Anthropic's implied valuation crossed $1T. OpenAI sits around $880B in the same venues, meaning this is the first time Anthropic has been priced higher in that channel. The driver is revenue: ARR reportedly leapt from $9B at end-2025 to $30B by March 2026, a ~3.3× expansion. Claude Code alone supposedly contributes $2.5B ARR (figures unconfirmed by official disclosure). Anthropic's actual primary-round valuation is $380B from its February raise, so the trillion is sentiment, not paper. Still, the demand asymmetry — bids for OpenAI shares reportedly soft in the same window — is a meaningful signal.
Primary source is the announcement page itself — see Tom's Hardware. Secondary corroboration is listed at the bottom; we have 3 secondary outlets cross-referenced. Any quoted figure in this piece is linked inline to its origin. Treat unlinked claims as my own framing, not as facts from the source.
On the headline numbers: Anthropic 세컨더리 평가액 lands at $1T (vs OpenAI 세컨더리 ~$880B). Anthropic ARR (Mar 2026) lands at $30B (vs End-2025 $9B (3.3×)). Claude Code 단일 ARR lands at $2.5B (미확인) (vs OpenAI Codex 미공개). Anthropic 1차 평가 (Feb 2026) lands at $380B (vs OpenAI 1차 추정 $300B+). These metrics are not all measured under identical methodology — pick whichever matters to your workload and verify against the primary source's appendix where possible.
Recent timeline: 2025-12-31 (Anthropic ARR $9B), 2026-02 ($380B 1차 라운드 클로즈), 2026-03 (ARR $30B 보도), 2026-04-23 (세컨더리 평가액 $1T 돌파). Read this not as a one-off but as the latest knot on a multi-month thread. The compression between the last two events is itself a signal that release cadence in this category is tightening.
Benchmarks / Key Numbers
| Metric | Value | Versus |
|---|---|---|
| Anthropic 세컨더리 평가액 | $1T | OpenAI 세컨더리 ~$880B |
| Anthropic ARR (Mar 2026) | $30B | End-2025 $9B (3.3×) |
| Claude Code 단일 ARR | $2.5B (미확인) | OpenAI Codex 미공개 |
| Anthropic 1차 평가 (Feb 2026) | $380B | OpenAI 1차 추정 $300B+ |
Timeline
| Date | Event |
|---|---|
| 2025-12-31 | Anthropic ARR $9B |
| 2026-02 | $380B 1차 라운드 클로즈 |
| 2026-03 | ARR $30B 보도 |
| 2026-04-23 | 세컨더리 평가액 $1T 돌파 |
Why it matters
Three lenses help here. First, the individual user lens: does this materially change a workflow you spend more than thirty minutes a day on (coding, writing, analysis, automation)? If yes, the second question is whether the same output now becomes faster, cheaper, or more reliable — separate the three to keep the adoption decision clean.
Second, the team / enterprise lens. POC teams should ask whether this shortens the path to validation. Production teams should isolate which variable shifts: unit cost, latency, or accuracy. Marketing claims and SLA reality routinely diverge in the days after a release; running your own benchmark on roughly thirty representative inputs is the only safe move.
Third, the competitive lens. Is the gap structural or temporary? Data advantages erode in 6–12 months. Infrastructure advantages persist for 12–24 months. Team-composition advantages are nearly impossible to replicate. This piece tries to attribute the headline figure to one of these where the evidence allows.
Fourth, the regulatory / ecosystem lens — easy to miss but compounding. Releases of this scale typically attract policy guidance or industry standards within a quarter or two, particularly around safety, data governance, and copyright. If you're not under pressure to decide today, watching one more cycle of those discussions can save you from re-platforming later.
Deep Dive
This section goes one layer deeper into technical detail. Light readers can skip; teams making procurement or research-direction decisions should not.
The most striking number is Anthropic 세컨더리 평가액 at $1T, set against OpenAI 세컨더리 ~$880B. To know whether that's meaningful, you need apples-to-apples comparison with the prior generation under identical measurement methodology — and most release notes don't publish that methodology in full. Even on identical benchmarks, prompt format, few-shot count, and temperature settings routinely shift results by 5–15 percentage points; that's the noise floor any external reproduction has to clear.
Architecturally, three deltas are most likely to explain the jump. First, training data composition: at fixed parameter count, better curation alone produces meaningful gains in code and math domains. Second, post-training pipeline strength: most of the headline improvements over the last 18 months trace to here, not to base-model architecture. Third, inference-time tool-call frequency: part of why models look smarter is simply that they reach for search or computation more aggressively. The exact split is unstated, but post-training is the most-likely dominant lever.
Limitations worth keeping front of mind. Self-reported benchmarks, thin adversarial data, sparse out-of-distribution generalization studies. Pricing marked 'preview' or 'limited access' historically gets revised at least once within six months. Operational quotas — context window, tool-call frequency caps — also tend to tighten quietly post-launch as usage scales. Build any 12-month ROI model with sensitivity to all three.
Open problems remain. Multi-step agent cost blow-up. Long-horizon memory consistency. Graceful degradation when tool calls fail. Responsibility allocation when an autonomous system makes a costly call (especially in code, finance, or healthcare). None of these are sufficiently addressed in this release. Production deployments that ignore them will receive an expensive bill from operations roughly six months in.
Who can use this
Solo developers and small teams. Hand off well-scoped backlog tickets, reclaim review and architecture time. The trap: low-quality specs lead the model into hallucinated work-arounds, and net time can go up. Spend the first month explicitly mapping spec quality vs. output quality on your team's actual tickets.
Startups. Prototype-to-feedback loops shrink from a week to a day for many feature classes. Especially valuable for data ingestion, simple ML pipelines, and internal tooling — areas where humans can move into review-only mode. Production-grade code, license review, and security audit still need a human in the loop.
Cost-sensitive enterprises. If this shifts the price-performance curve, workloads of meaningful scale (call centers, document processing, search) can run 30–50% cheaper for equivalent quality. Compounded over a quarter, that's real OPEX impact.
On-prem / governance-sensitive teams. Open-weight options open paths for finance, healthcare, and public sector workloads that have stalled on cloud-LLM adoption. The data-sovereignty story changes when you can serve the same quality on hardware you control.
Researchers and students. Releases at this importance level reset 6–12 months of research agenda. If your topic is adjacent, design follow-up experiments now — replications and extensions of fresh frontier results have an unusually short impact half-life.
Skeptical takes
Three reservations come up repeatedly. Read them alongside the body, not after.
Self-reported benchmarks lack methodology disclosure. Over half of major releases in the last six months saw external reproductions land below the headline numbers. Treat the announced figures as the upper bound; measure your own workloads independently before committing.
Demo-fit examples don't survive long-tail real workloads. Keywords like 'agentic', 'human-level', or 'frontier' tend to hold inside curated demo scenarios and degrade 30–50% in production with domain-specific vocabulary, non-standard inputs, or multilingual mixes. A two-week pilot on your own representative inputs is non-optional.
Post-launch pricing and quotas tighten. Within the same category, prices have routinely been revised upward and operational caps narrowed within months of launch. Any 12-month ROI estimate should include a 20–30% price-increase scenario in the sensitivity analysis. 'Preview' models do not carry production SLAs — don't pin business-critical workloads to them.
What to watch next week
Four signals to track. (1) Competitive responses or pricing moves in the same category — a response within a week signals strong market pressure. (2) Independent reproductions from academic or third-party benchmarkers — within ±5 points of the headline is 'as advertised'; beyond that, caution. (3) Long-tail user feedback in established communities (Reddit, HN, X) — the gap between marketing tone and on-the-ground tone shows up within a week. (4) Ecosystem integration announcements — when major IDEs or platforms merge integration PRs within a week, this release is becoming the industry default. Alignment across all four points to a structural shift; divergence points to a marketing cycle.
One-line takeaway
On Forge Global and other secondary markets, Anthropic's valuation hit $1T, eclipsing OpenAI's ~$880B for the first time. The drive is revenue: ARR reportedly j
Sources
- [Primary] Anthropic surpasses biggest rival OpenAI in secondary market valuation — surges to $1 trillion
- [Yahoo Finance] Anthropic just overtook OpenAI with $1 trillion valuation
- [Nerd Level Tech] Anthropic $30B ARR — How Claude Overtook OpenAI in Revenue
- [Quartz] Anthropic reaches $1 trillion valuation on secondary markets
관련 기사

Anthropic Just Toppled OpenAI in Revenue – Here's What That Actually Means
Anthropic hit $30B annual revenue run rate, surpassing OpenAI's $24–25B for the first time. In 15 months from $1B to $30B, the enterprise AI race has a new leader.

Anthropic's Week: $30B ARR and the Birth of a Platform Company
Anthropic hit $30B ARR this week, overtaking OpenAI for the first time, signed a multi-gigawatt Broadcom–Google TPU deal, and launched Claude Managed Agents. The transformation from safety lab to platform company, compressed into one week.

Anthropic Launches Claude Marketplace — The First Real Enterprise AI App Store
Anthropic opened a B2B marketplace where enterprises can buy Claude-powered third-party apps using existing budgets. Zero commission, six launch partners, and a platform play that could reshape enterprise AI procurement.
AI 트렌드를 앞서가세요
매일 아침, 엄선된 AI 뉴스를 받아보세요. 스팸 없음. 언제든 구독 취소.