Nexthop AI Raises $500M at $4.2B Valuation — Building the Networking Layer for AI Data Centers
Former Arista COO's Nexthop AI raises $500M Series B. Three new AI-optimized switches, disaggregated spine architecture, and the AI networking market explained.

GPUs Are Useless Without the Network Connecting Them
Training an AI model requires thousands of GPUs working in concert. But stacking GPUs means nothing if they can't talk to each other fast enough. In distributed training, the speed of gradient exchange between GPUs determines the overall training speed. The physical infrastructure connecting those GPUs — data center networking — is quietly becoming the biggest bottleneck in AI.
On March 10, Nexthop AI — founded in 2024 by former Arista Networks COO Anshul Sadana — closed a $500M Series B at a $4.2B valuation. Lightspeed Venture Partners led the round, with a16z and Altimeter Capital participating. The round was heavily oversubscribed, with investor interest exceeding 2x the target amount, according to Yahoo Finance. Total funding to date stands at approximately $750M.
Reaching unicorn status just two years after founding tells you everything about how hot the AI infrastructure market is right now.
Why Data Center Networking Matters Now
Traditional data center switches from Arista, Cisco, and Juniper were designed for predictable traffic patterns — web serving, database queries, CDN distribution. AI workloads are a completely different beast. Here's why:
| AI Workload | Traffic Pattern | Networking Requirement |
|---|---|---|
| Distributed Training (All-Reduce) | Every GPU talks to every GPU simultaneously | Ultra-low latency, uniform bisection bandwidth |
| MoE Inference | Dynamic routing to unpredictable experts | Adaptive traffic management, microsecond rerouting |
| KV Cache Sharing | Memory-speed bandwidth for large context windows | Sustained 100+ GB/s throughput |
| Checkpointing | Periodic multi-TB bursts to storage | Burst bandwidth, congestion control |
Nexthop AI builds switches designed from the ground up for these AI-specific patterns, rather than retrofitting general-purpose networking hardware.
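To see why "every GPU talks to every GPU" stresses the fabric, here is a back-of-envelope model of ring all-reduce, the collective most training frameworks use for gradient exchange. All figures below (bucket size, link speed, per-hop latency) are illustrative assumptions, not Nexthop specs.

```python
# Toy model of ring all-reduce: each GPU sends 2*(N-1)/N of the gradient
# volume over its link, in 2*(N-1) sequential steps, so both link
# bandwidth AND per-hop latency bound the result.

def ring_allreduce_ms(num_gpus: int,
                      gradient_bytes: float,
                      link_gbps: float,
                      hop_latency_us: float) -> float:
    """Estimated time in milliseconds for one ring all-reduce."""
    steps = 2 * (num_gpus - 1)
    bytes_per_link = 2 * (num_gpus - 1) / num_gpus * gradient_bytes
    transfer_s = bytes_per_link * 8 / (link_gbps * 1e9)
    latency_s = steps * hop_latency_us * 1e-6
    return (transfer_s + latency_s) * 1e3

# Hypothetical 1 GB gradient bucket across 1,024 GPUs on 400G links:
fast = ring_allreduce_ms(1024, 1e9, 400, 5)    # 5 us per hop
slow = ring_allreduce_ms(1024, 1e9, 400, 50)   # 50 us per hop
print(f"5 us hops:  {fast:.0f} ms")
print(f"50 us hops: {slow:.0f} ms")
```

With a small bucket, the latency term dominates: a 10x increase in per-hop latency roughly triples the collective's wall-clock time, which is why the table's first row calls out ultra-low latency rather than raw bandwidth alone.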
Three New Switches — Detailed Specs
Alongside the funding announcement, Nexthop unveiled three new network switches:
| Model | Throughput | Port Config | Purpose | Chipset |
|---|---|---|---|---|
| NH-4010 | 51.2 Tbps | 400G x 128 or 800G x 64 | AI cluster leaf/spine | Broadcom Tomahawk 5 |
| NH-4220 | 102.4 Tbps | 800G x 128 | Large-scale AI fabric backbone | Broadcom Tomahawk 6 |
| NH-5010 | Modular | Module-based expansion | Multi-datacenter interconnect | Broadcom (Disaggregated) |
The NH-4010 is the building block for individual AI clusters (256–1,024 GPUs), running a custom OS that recognizes all-reduce traffic patterns and dynamically balances loads. The NH-4220 doubles that to 102.4 Tbps — among the highest bandwidth Ethernet switches commercially available — targeting 10,000+ GPU clusters needed for training frontier models.
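As a quick sanity check, the quoted capacities line up with the port configurations in the table, assuming the usual convention that the Tbps figure is the sum of per-port line rates:

```python
# Verify that total switch throughput equals port_count x port_speed
# for each configuration listed in the spec table above.

def fabric_tbps(ports: int, gbps_per_port: int) -> float:
    return ports * gbps_per_port / 1000

assert fabric_tbps(128, 400) == 51.2   # NH-4010: 400G x 128
assert fabric_tbps(64, 800) == 51.2    # NH-4010: 800G x 64
assert fabric_tbps(128, 800) == 102.4  # NH-4220: 800G x 128
print("port configs consistent with quoted throughput")
```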
But the star of the show is the NH-5010 and its disaggregated spine architecture. Instead of a monolithic switch box, it splits into two functional tiers: an internal tier for intra-datacenter traffic and an external tier for inter-datacenter packet orchestration. The key benefit is scalability: when expanding from one data center to four, you add external tier modules instead of replacing the entire network. For hyperscalers spending billions on AI infrastructure, this means massive cost savings.
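The scalability argument can be made concrete with a toy cost model. Every number below is an invented placeholder; only the shape of the two curves (incremental external-tier modules versus repeated forklift upgrades of a monolithic spine) reflects the claim above.

```python
# Toy comparison: growing from 1 to N data centers by adding
# external-tier modules (disaggregated, NH-5010-style) vs replacing
# the spine with a larger monolithic box at every expansion step.
# All cost figures are made-up placeholders.

INTERNAL_TIER = 10.0     # hypothetical: internal tier per data center
EXTERNAL_MODULE = 3.0    # hypothetical: one inter-DC external module
MONOLITHIC_SPINE = 12.0  # hypothetical: spine capacity per data center

def disaggregated_total(n_dcs: int) -> float:
    # Internal tier bought once per DC; a pair of external modules
    # added for each DC that joins the fabric.
    modules = max(0, n_dcs - 1) * 2
    return n_dcs * INTERNAL_TIER + modules * EXTERNAL_MODULE

def monolithic_total(n_dcs: int) -> float:
    # Each expansion step discards the old spine and buys one sized
    # for the new footprint, so earlier spend is written off.
    return sum(k * MONOLITHIC_SPINE for k in range(1, n_dcs + 1))

for n in (1, 2, 4):
    print(f"{n} DC(s): disaggregated={disaggregated_total(n):.0f}, "
          f"monolithic={monolithic_total(n):.0f}")
```

Under these assumptions the monolithic path's cumulative cost grows quadratically with the number of data centers while the disaggregated path grows linearly, which is the economic case the architecture makes to hyperscalers.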
Market Context: The Numbers
The Big Four (Alphabet, Amazon, Meta, Microsoft) are expected to spend roughly $650B on AI data center infrastructure in 2026. Networking typically accounts for 15–20% of total build cost. The AI data center networking market alone is projected to reach $100B by 2031.
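A quick cross-check of these figures: the networking share implied by 15–20% of $650B, and the $100B 2031 projection discounted back at a 25% CAGR to an implied 2026 base.

```python
# Arithmetic behind the market figures quoted above.

big_four_capex_2026 = 650e9
net_low = 0.15 * big_four_capex_2026
net_high = 0.20 * big_four_capex_2026
print(f"Networking share of Big Four 2026 capex: "
      f"${net_low/1e9:.1f}B - ${net_high/1e9:.1f}B")

# Work the $100B-by-2031 projection backwards at a 25% CAGR
# over the five years from 2026 to 2031.
market_2031, cagr, years = 100e9, 0.25, 5
implied_2026 = market_2031 / (1 + cagr) ** years
print(f"Implied 2026 market at 25% CAGR: ${implied_2026/1e9:.1f}B")
```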
| Metric | Value | Note |
|---|---|---|
| Big Four AI infra spend (2026) | ~$650B | Combined capex |
| AI DC networking market (2031) | ~$100B | 25%+ CAGR |
| Nexthop valuation | $4.2B | Series B |
| Total funding raised | ~$750M | Series A + B |
| Time to unicorn | ~2 years | Founded 2024 |
a16z stated on X: "AI is forcing a complete rebuild of data center infrastructure." The consensus across Silicon Valley is that networking is the next major bottleneck after GPUs.
Competitive Landscape
The AI data center networking space has four major fronts:
Nvidia NVLink/NVSwitch — The de facto standard for GPU-to-GPU interconnect. NVLink 5.0 (Blackwell generation) delivers 1.8 TB/s per GPU. But it's Nvidia-only and rack-scale only (up to 72 GPUs in an NVL72 rack). Connecting the full data center still requires Ethernet, which is exactly where Nexthop operates.
Arista Networks — Sadana's former employer and the cloud data center networking market leader (~$120B market cap, $7B+ annual revenue). Arista has its own AI switches (7060X6 series) but is transitioning slowly from general-purpose to AI-optimized. Nexthop is essentially exploiting the gap Sadana knows intimately.
Cisco/Juniper — Legacy networking giants with massive enterprise install bases. Cisco (~$240B market cap) and Juniper (now part of HPE) dominate traditional enterprise networking but are trailing in AI-specific products.
Ultra Ethernet Consortium — An industry group including AMD, Intel, Meta, and Microsoft working on next-gen Ethernet standards optimized for AI. Nexthop's approach aligns with UEC's direction.
| Competitor | Strength | AI Weakness |
|---|---|---|
| Nvidia NVLink | Highest GPU bandwidth | Nvidia-only, rack-scale limit |
| Arista | Cloud networking leader | Slow AI-specific pivot |
| Cisco/Juniper | Global enterprise base | Late to AI market |
| Nexthop AI | AI-native design, fast execution | No custom ASIC, Broadcom dependent |
The Broadcom Dependency
All three switches use Broadcom merchant silicon (Tomahawk series). This means Nexthop isn't building custom ASICs — which has clear tradeoffs. The upside is speed: ASIC development takes 2–3 years, while building on merchant silicon let Nexthop ship three products within two years of founding. The downside is that competitors can buy the same chips. Nexthop's differentiation must therefore come from software (custom OS, traffic management algorithms) and system architecture (disaggregated spine). Whether the company eventually develops custom silicon will determine its long-term competitive moat.
Why It Matters
Nexthop AI's $500M raise signals that the AI infrastructure stack is being rebuilt from scratch — not just at the compute layer (GPUs), but at the networking layer too. If GPUs are the engine, the network is the highway. Even the most powerful engine can't perform on a two-lane road.
Two years to $4.2B unicorn. $500M oversubscribed round. Lightspeed, a16z, and Altimeter on the cap table. The market is betting that the next AI bottleneck is the network — and Nexthop is building the solution.