
Samsung's HBM4 finally cleared both Nvidia and AMD qualification — first crack in SK hynix's 70% lock

Korean reports in early May say Samsung's HBM4 has cleared final qualification at both Nvidia and AMD, with mass shipments starting in June. UBS estimates SK hynix will hold ~70% of the HBM4 market for Nvidia's next-gen 'Rubin' platform. Samsung's entry threatens to compress that lead and reshape how the HBM super-cycle is split between the Korean memory duo.

8 min read · Global Economic
Samsung HBM4 — Nvidia and AMD qualification cleared, mass shipments June
Source: Wikimedia Commons (Samsung Headquarters)

The first real crack in SK hynix's HBM lock

Korean media in early May reported that Samsung's HBM4 has cleared final qualification at both Nvidia and AMD, with mass shipments starting in June. Why this matters: HBM has been effectively an SK hynix monopoly for 18 months. Samsung lost ground in HBM3 and HBM3E due to qualification issues and missed most of the Hopper / Blackwell mass-production cycle. HBM4 is Samsung's "back to the AI memory market" milestone — and crucially, it's slotted into Nvidia's next-gen "Rubin" platform at parity sizing.

Even if UBS is right that SK hynix keeps ~70% of Rubin HBM4, the question that matters for the next 18–24 months is who gets the other 30%. Before Samsung cleared qualification, Micron (US) was the assumed beneficiary. After Samsung's dual-clear in May, Micron's expected share gets contested. The market structure shifts from "SK hynix + Micron" to a real three-way race: SK hynix + Samsung + Micron.

One more layer — Samsung's qualification matters beyond memory revenue. Samsung Foundry, Samsung LSI (system semis), and Samsung Display all use "AI memory share" as a credibility signal for adjacent businesses. Samsung HBM4 recovery resets the market's read on Samsung's overall semis competitiveness in the AI era.

The cast — Samsung, SK hynix, Nvidia, AMD, Micron

Samsung Electronics (memory division). One of Korea's two memory giants. ~40% global DRAM/NAND share but lagged SK hynix in HBM for 18 months. 2024–2025 quality issues on HBM3E 12-Hi stacking pushed Samsung off the Nvidia H200/B200 main-supplier list. HBM4 is Samsung's first "lessons learned" generation — 12-Hi stack + 1c-nm DRAM + redesigned TSV packaging.

SK hynix. HBM market leader (~50% overall, ~70% of next-gen HBM4 per UBS). Q1 2026 operating profit of 37.6 trillion KRW (a record). Nvidia's primary memory partner; the de facto "AI memory" standard supplier. Counter-moves to Samsung's entry: (1) accelerate HBM5 R&D, (2) margin defense, (3) expand beyond Nvidia/AMD to OEM customers.

Nvidia. Hopper (H100/H200) → Blackwell (B100/B200) → Rubin (mass production H2 2026). Each Rubin GPU uses 8–12 HBM4 stacks, roughly 2x the per-GPU memory revenue vs. H200. Diversifying memory suppliers across SK + Samsung + Micron strengthens both supply security and cost negotiation.

AMD. MI355X (2025) → MI400 series (H2 2026), with HBM4 inside. Smaller AI-accelerator share than Nvidia, but a meaningful tailwind for Samsung. Dual-sourcing memory strengthens AMD's pricing leverage and supply assurance; expect Lisa Su to address diversification on the May 13 earnings call.

Micron. The US player among the memory big three. Accelerating HBM3E and HBM4 production. Q1 2026 record revenue of ~$9B, but process and capacity still trail the Korean duo. Samsung's qualification undermines Micron's assumed "non-SK ~30% share" trajectory — the biggest swing factor for Micron's next four quarters.

TSMC. Doesn't make memory but is the critical packaging partner via CoWoS-L. Whether Samsung HBM4 plays nicely with TSMC packaging is one of the largest ramp risks. As of May, compatibility looks confirmed; June ship start is on schedule.

What's inside — qualification clear, June ramp, Rubin revenue split

What "qualification clear" means. Big chip customers run 100+ tests before adopting memory: electrical (DDR/voltage/IO margins), reliability (24/7 + power and thermal cycling), packaging compatibility (CoWoS, EMIB), production yield. Clearing qualification is itself the result of 12–18 months of work. Clearing both Nvidia and AMD simultaneously means a single design passed multiple gauntlets — efficient execution.

June ramp. Mass shipments in June align with Nvidia's Rubin H2 ramp. Samsung HBM4 in initial Rubin SKUs is now effectively decided. Missing this window pushes Samsung to the 2027 Rubin successor.

Rubin revenue split. Per UBS, SK hynix ~70%, Samsung + Micron split the other 30%. Pre-Samsung-clear assumption was Micron ~25% / open ~5%. Post-clear, the most likely H2 stable split is roughly SK 65% / Samsung 20% / Micron 15%. SK keeps a price premium, but its ability to push more pricing is now capped.

Report date: Early May 2026
Qualifications cleared: Nvidia + AMD
Mass shipment start: June 2026
HBM4 spec: 12-Hi stack, 1c-nm DRAM
Rubin revenue split (estimated): SK hynix 65–70% / Samsung 15–20% / Micron 10–15%
Samsung memory revenue impact: +20–30% over 12 months
SK hynix pricing power: compresses
Micron guidance impact: negative

Samsung memory recovery sizing. 2025 Samsung memory revenue ~100T KRW (HBM ~10%). HBM4 ramp could bring HBM share to 20% in 2026, 30% in 2027 — adding 20–30T KRW absolute. Memory share of Samsung's ~350T KRW total expands from 30% to 35%.
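The sizing above can be sanity-checked in a few lines. All figures are the article's estimates in trillion KRW, not reported actuals, and the flat non-HBM baseline is a simplifying assumption of this sketch:

```python
# Sanity check of the Samsung memory sizing above.
# Figures are the article's estimates (trillion KRW); the flat non-HBM
# baseline is a simplifying assumption, not a forecast.
NON_HBM = 90.0  # 2025 memory revenue ex-HBM (~100T total, HBM ~10%)

def memory_revenue_at_hbm_share(hbm_share, non_hbm=NON_HBM):
    """Total memory revenue if HBM reaches hbm_share of it, non-HBM flat."""
    return non_hbm / (1.0 - hbm_share)

for year, share in [(2025, 0.10), (2026, 0.20), (2027, 0.30)]:
    total = memory_revenue_at_hbm_share(share)
    print(f"{year}: {total:.1f}T memory, +{total - 100.0:.1f}T vs 2025")

# 2027 lands at ~128.6T, i.e. ~28.6T added (inside the article's +20-30T
# range), and 128.6 / 350 is ~37% of group revenue, up from ~29%.
```

Under those assumptions the math holds together: a 30% HBM mix implies roughly the 20–30T KRW uplift and the ~35% memory share the article cites.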

Who wins, who loses

Samsung. First, AI memory comeback — the "HBM laggard" tag finally drops. Second, Lee Jae-yong leadership validation — HBM weakness was the most-cited 2024 critique of his tenure; HBM4 recovery directly answers it. Third, knock-on credibility for Samsung Foundry — AI memory → AI systems → foundry wins is the chain. Foundry still trails TSMC, but HBM4 is the first recovery signal.

SK hynix. Wins the "still #1" framing — keeps 65–70% share. Loses pricing leverage and single-supplier premium. Q1 2026 record profit may have marked peak pricing. But absolute revenue continues to grow given total Rubin demand.

Nvidia. Pricing leverage. Memory is ~40% of the Rubin BOM, so a 1% memory price cut = ~0.4pp system margin. Plus supply security — instant alternative if any one supplier hiccups. Tradeoff: more vendor management overhead.
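The BOM arithmetic in that sentence can be made explicit. This is a toy normalization, not Nvidia's actual cost model:

```python
# Toy check of the margin arithmetic above: memory at ~40% of the Rubin
# BOM means a 1% memory price cut trims total BOM cost by 0.4%. Reading
# a 0.4% BOM saving as ~0.4pp of system margin assumes revenue stays
# fixed and the saving flows straight to margin -- a simplification.
bom = 100.0                 # normalized total BOM cost
memory_cost = 0.40 * bom    # memory portion (~40% per the article)
price_cut = 0.01            # 1% memory price reduction
saving_frac = memory_cost * price_cut / bom
print(f"BOM saving from a 1% memory cut: {saving_frac:.1%}")
```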

AMD. Same dual benefit. MI400 ramps alongside Rubin, so memory disruption was a top operational risk. Samsung's entry reduces it.

Micron. Direct loser. Likely sell-side guidance cuts in the next 4 quarters. Watch for Citi/Morgan Stanley downgrades around May 13. Long-term, US-only capacity + CHIPS Act subsidies preserve a US-government / defense differentiator.

TSMC. Packaging revenue stability. CoWoS-L was always supply-constrained; with Korean duo HBM4 ramping, TSMC packaging revenue moves with it. Likely +30–40% packaging segment guide in H2 2026.

Korean government. Sovereign-AI asset expansion. Owning two of the three global AI-memory leaders strengthens Korea's position in US/China geopolitics.

Past patterns — what worked, what didn't

Worked: SK hynix's HBM3 first-mover (2023–2024). SK hynix took the lead at HBM3 with H100, then dominated for 18 months. Drove record profits in 2024–2025. Classic memory-market first-mover advantage. Samsung's HBM4 catch-up is the counter case, 18 months late.

Worked: Samsung memory's 1980s–1990s Japan overtake. Samsung went from trailing NEC, Toshiba, Fujitsu in the 80s to #1 in DRAM by the 90s. Capacity + process + R&D triangle. The same pattern is repeating in HBM.

Failed: Intel's exit from memory (1985). Lost to Japan, exited memory, refocused on CPUs. The CPU bet underwrote 30 years of growth, but memory was a permanent loss. If Samsung had missed HBM4 again, the memory franchise itself could have been at risk. This recovery sidesteps that scenario.

Failed: Micron's 2010s catchup attempts. Micron acquired Elpida (Japan), captured Taiwan OEMs, but stayed behind the Korean duo on cost and process. The next five years for Micron depend on whether US capacity + government subsidies become a real differentiator or whether the process gap to Korea closes.

Counter-plays

SK hynix. Accelerate HBM5 R&D — 16-Hi stacks, next-gen 1c-nm DRAM, integrated optical interconnect. 2027 production target. Expand beyond Nvidia/AMD into hyperscaler in-house silicon (AWS Trainium, Google TPU) memory supply.

Micron. Use CHIPS Act subsidies to accelerate US capacity (Boise, Idaho + Syracuse, NY fabs online 2027). Differentiate on US government/defense/financial markets where "America-first" matters. Push HBM4 12-Hi stack production to close the cost gap.

Samsung. Start HBM5 R&D in parallel with SK and Micron. Pitch foundry + memory bundle deals to hyperscalers — a single supplier for both AI accelerator and HBM is a real differentiator for AWS, Google, Microsoft custom silicon.

Nvidia internal memory IP? If pricing pressure rises, Nvidia could invest in proprietary memory IP — but fab capex makes this impractical. More likely: control more of the memory controller / HBM stack packaging IP to dilute memory vendor pricing power.

China (CXMT, Yangtze Memory). Sanctions block leading-edge process and EUV access. HBM4 production entry is effectively impossible. China builds a parallel domestic stack with its own AI accelerators (Huawei Ascend, Cambricon) and 2-3-generation-behind memory. Globally, the Korea + Micron three-way race holds.

So what changes — by persona

Korean semis investors. Direct catalyst for Samsung Electronics. Stock at ~80,000 KRW, ~480T KRW market cap. HBM4 recovery plausibly opens +20–30% upside over 12 months. SK hynix faces some multiple compression on pricing pressure but absolute revenue keeps growing. Micron is short-term sell pressure. TSMC gains packaging tailwind.

ML / AI infra engineers. Rubin ramp speeds H100/H200 price decline. Used H200 prices likely -30–40% over the next 6–12 months. Cloud GPU instance rates also gradually decline as Rubin supply stabilizes.

Enterprise IT decision-makers. AI infra capex calculus tilts. Buy H200 now for immediate training; wait for Rubin for 2027 production runs. Memory supply stability improves Rubin supply visibility.

Founders / startups. AI training cost likely -20–30% over 12–18 months. Schedule shifts can capture capex savings. The "build vs. API" breakeven shifts further toward "build."
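A toy breakeven model shows why cheaper compute tilts the calculus toward "build". Every number here is invented purely for illustration:

```python
# Hypothetical build-vs-API breakeven: monthly token volume above which
# self-hosting beats API pricing. All figures are invented placeholders.
def breakeven_mtokens(fixed_monthly, variable_per_m, api_per_m):
    """Millions of tokens/month where build cost equals API cost."""
    return fixed_monthly / (api_per_m - variable_per_m)

API_PRICE = 2.0        # $ per 1M tokens via API (hypothetical)
FIXED = 50_000.0       # $ per month of GPU capacity (hypothetical)
VARIABLE = 0.5         # $ marginal cost per 1M tokens (hypothetical)

before = breakeven_mtokens(FIXED, VARIABLE, API_PRICE)
after = breakeven_mtokens(FIXED * 0.75, VARIABLE * 0.75, API_PRICE)
print(f"breakeven: {before:,.0f}M -> {after:,.0f}M tokens/month")
# A ~25% drop in build-side costs pulls the breakeven volume down ~30%,
# shifting more workloads past the "build" threshold.
```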

Regulators. Korean memory duo's expanded global share is an "economic security" asset. MOTIE and MSIT will prioritize Korean semis R&D subsidies. Korea's role as the swing supplier between the US and China strengthens.

Consumers. Indirect effect: Samsung Galaxy and AI home appliances increasingly use Samsung's own silicon and memory, lifting per-device AI performance.
