
Richard Socher's New Startup Just Came Out of Stealth With $650M to Build an AI That Improves Itself

Recursive Superintelligence emerged from stealth on May 13-14, raising $650M at a $4.65B valuation. GV and Greycroft led the round; Nvidia and AMD took strategic stakes. CEO Richard Socher (ex-Salesforce chief scientist) and co-founder Tim Rocktäschel (UCL professor, ex-DeepMind) are aiming at recursive self-improvement: an AI system that analyzes and rewrites itself without a human in the loop. Public launch is targeted for mid-2026.

10 min read · TechCrunch
Recursive Superintelligence — self-improving AI startup raises $650M
Source: Getty Images / TechCrunch

A Startup That Aims to Build AI That Builds Itself Just Showed Up

Here's the deal: on May 13-14, a startup called Recursive Superintelligence came out of stealth with $650M raised at a $4.65B valuation. GV (formerly Google Ventures) and Greycroft led; Nvidia and AMD took strategic positions. CEO Richard Socher (ex-Salesforce chief scientist) founded it with Tim Rocktäschel (UCL professor, ex-Google DeepMind senior researcher). Under 30 employees today. Public launch targeted for mid-2026.

The mission is loud — "build an AI system that analyzes and improves itself without direct human intervention." Commonly known as recursive self-improvement or "AI that makes AI." It's one of the scenarios AI-safety researchers worry about most. Both Anthropic and OpenAI have explicitly flagged it as risky. This company is making it the stated objective.

$4.65B for a stealth-stage company is a large number. For comparison: OpenAI's seed round in 2015 priced the company at $100M; Anthropic's Series A in 2021 was about $500M. Recursive Superintelligence is launching at roughly 9× Anthropic's same-stage mark and more than 40× OpenAI's. That's a function of Socher's track record + the market's appetite for "self-improving AI" + the 2026 lab-valuation inflation.

The cap table is itself a signal. GV and Greycroft are conventional VCs. Nvidia and AMD coming in together as strategics — that's about additional compute revenue plus an early seat at a next-gen lab. With Amazon firmly behind Anthropic and Microsoft behind OpenAI, Nvidia and AMD are using Recursive Superintelligence as their first major "direct investment in an AI lab" play.

The Players — Socher, Rocktäschel, and the "Self-Improving AI" Thesis

Recursive Superintelligence. Dual HQ in London and San Francisco. Founded probably in late 2025 (exact date unverified). Under 30 staff. The company name itself is the mission — "recursive" + "superintelligence." That language is borrowed wholesale from AI safety thinkers like Yudkowsky and Bostrom, who used "recursive self-improvement leading to superintelligence."

Richard Socher (CEO). German, Stanford PhD. Key NLP figure — early deep-learning NLP work like GloVe and recursive neural networks. Founded MetaMind in 2014, acquired by Salesforce in 2016 where he became chief scientist. Left in 2020 to start you.com (AI search). Departed you.com in 2024 to start Recursive Superintelligence. you.com's future is undetermined (new CEO or wind-down possible).

Tim Rocktäschel (co-founder). German. UCL professor. Former Google DeepMind senior researcher. Known for autonomous agent research (NetHack Learning Environment and related work). Contributed to AlphaGo/AlphaStar follow-ups at DeepMind.

Investors. Lead — GV (Google Ventures), Greycroft. Strategic — Nvidia, AMD. Other (suspected) — Index Ventures, Lightspeed, Khosla Ventures. $650M is a large Series A. The $4.65B mark is in OpenAI 2018 / Anthropic 2022 territory.

"Self-improving AI" concept. A system that (a) meta-evaluates its own performance, (b) diagnoses weaknesses, (c) autonomously rewrites its code/architecture/data, (d) trains a new version and repeats from (a). In theory, fast self-improvement without human intervention. To safety researchers, this is the textbook "intelligence explosion" pathway.

Competitors. Direct "self-improving AI" competitors are still scarce. (a) OpenAI's GPT-5.5 → GPT-6 fine-tune automation (internal). (b) Anthropic's Constitutional AI self-critique loop. (c) DeepMind's AlphaProof and similar self-verification stacks. (d) Sakana AI (Japan) AI Scientist (2024). Recursive Superintelligence aims to bundle all of this under "self-improvement as the company mission."

What They're Building and Why $650M

The vision. Per Socher's May 14 interview: "Today, humans collect data, humans design models, humans fine-tune. We want to build a system where the AI does all of it." Concretely: (a) the AI analyzes its own inference outputs, (b) computes its own performance metrics, (c) identifies data weaknesses, (d) auto-generates or sources new training data, (e) proposes architecture modifications, (f) trains a new version and loops back.
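The six-step pipeline Socher describes can be caricatured in a few lines of Python. This is purely an illustrative sketch under toy assumptions — a lookup table stands in for a model, a fixed eval set stands in for real benchmarks, and every name here is invented; the company has published no code or whitepaper.

```python
# Toy sketch of the (a)-(f) self-improvement loop. Hypothetical illustration
# only; a real system would operate on models and training runs, not tables.

EVAL_SET = [(x, x * 2) for x in range(10)]  # toy task: learn to double numbers

def evaluate(model):
    """Steps (a)+(b): score the model's outputs against the eval set."""
    correct = sum(1 for x, y in EVAL_SET if model(x) == y)
    return correct / len(EVAL_SET)

def find_weaknesses(model):
    """Step (c): collect the inputs the current model still gets wrong."""
    return [x for x, y in EVAL_SET if model(x) != y]

def generate_data(weak_inputs, budget=3):
    """Step (d): synthesize new training pairs under a per-round data budget."""
    return [(x, x * 2) for x in weak_inputs[:budget]]

def train(table, new_data):
    """Steps (e)+(f): produce the next model version by absorbing new data."""
    updated = dict(table)
    updated.update(new_data)
    return updated

def self_improve(rounds=4):
    table = {}  # version 0 knows nothing
    history = []
    for _ in range(rounds):
        model = lambda x, t=table: t.get(x, -1)
        history.append(evaluate(model))
        weak = find_weaknesses(model)
        if not weak:
            break
        table = train(table, generate_data(weak))
    history.append(evaluate(lambda x, t=table: t.get(x, -1)))
    return history

print(self_improve())  # -> [0.0, 0.3, 0.6, 0.9, 1.0]
```

The point of the sketch is the closed loop: evaluation feeds weakness detection, which feeds data generation, which feeds the next training round — no human in the cycle. The open research question is whether that loop keeps improving a real model, not a lookup table.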

Technical approach (inferred). No public whitepaper yet, but combining Socher's NLP background + Rocktäschel's RL/agent track + "recursive" in the name suggests: (a) base LLM at GPT-5.5/Claude Opus 4.7 caliber, (b) meta-learning layer for self-eval, (c) RL-driven self-improvement loop, (d) synthetic-data-generation mechanism.

Use of the $650M. A <30-employee company raising $650M is unusual. Estimated allocation: (a) compute (60-70%): roughly $390-455M on GPUs and cloud, (b) talent (15-20%): 50-80 senior researcher hires, (c) safety infrastructure (~10%): control mechanisms for self-improvement, (d) ops (5-10%). Sufficient to scale from 30 to 200+ employees within 12-24 months.

Why Nvidia and AMD together. Both chip leaders investing in the same startup is rare. The read: (a) Recursive Superintelligence will generate large near-term GPU demand, (b) in the Nvidia-vs-AMD race, both want compute lock-in, (c) signals that a next-gen lab can scale on external GPUs without an in-house ASIC. Unlike OpenAI (Stargate ASIC), Google (TPU), Meta (MTIA) — Recursive Superintelligence stays GPU-native.

Why announce before launching. Public launch is targeted mid-2026. The May 14 release is essentially a "pre-launch announcement." Why: (a) talent-recruiting magnet, (b) compute priority with Nvidia/AMD, (c) up-mark for the next round's valuation, (d) early dialogue with the AI safety community to address "self-improving AI" concerns proactively.

| Metric | Recursive Superintelligence (May 2026) | OpenAI seed (2015) | Anthropic Series A (2021) |
|---|---|---|---|
| Round | $650M | $130M (seed + Series A combined) | $124M |
| Valuation | $4.65B | $0.1B | $0.5B |
| Employees | <30 | ~10 | ~20 |
| Mission | Self-improving AI | AGI for humanity | Safe AGI |
| Lead investors | GV, Greycroft | Khosla, Reid Hoffman | Jaan Tallinn, Dustin Moskovitz |
| Key people | Socher, Rocktäschel | Sam Altman, Greg Brockman | Dario, Daniela Amodei |

What Each Side Gets

Recursive Superintelligence's wins. First, capital. $650M funds a 30-person company for 12-24 months across talent and compute simultaneously. Second, talent magnet. Socher and Rocktäschel's reputations + an ambitious mission = a strong pull for AI PhDs and senior engineers. Third, dual compute access (Nvidia + AMD). Priority GPU allocation from both. Fourth, "next-gen lab" positioning. A real shot at the #3 lab spot behind OpenAI and Anthropic.

Richard Socher personally. Estimated 20-30% equity (CEO + founder). At a $4.65B mark, his stake is worth $1-1.5B. Big step-up from you.com days in wealth, reputation, and influence. Goes from "celebrity researcher" to "celebrity CEO."

Tim Rocktäschel. Can plausibly keep his UCL professorship (a side-appointment model) alongside founder equity. Bridges academic RL/agent research and industry leadership over the next 5-10 years.

Nvidia and AMD's wins. Priority access to Recursive Superintelligence's compute demand pipeline. Direct revenue + next-gen-lab lock-in. Especially significant for AMD: with Nvidia holding roughly 80% of the market, the MI300X/MI350 line needs lab footholds like this one.

GV, Greycroft, other VCs. Direct exposure to a next-gen lab. GV has limited Anthropic exposure (Google's investment is separate), so Recursive Superintelligence is its "third-lab" play. Greycroft adds reputational capital by backing an ambitious self-improving-AI thesis.

OpenAI and Anthropic — losses and gains. Losses — a real competitor for the #3 lab slot, and a new talent magnet. Gains — if Recursive Superintelligence absorbs the brunt of "self-improving AI" safety concerns, OpenAI and Anthropic's "safer lab" positioning gets cleaner. In regulation, OpenAI/Anthropic can frame themselves as "responsible labs" relative to "ambitious labs."

AI safety community. Concerned. Self-improving AI is the top-cited risk area. Expect MIRI, Open Philanthropy, EA-aligned groups to engage actively. The company has earmarked some funding for safety infrastructure, but external verification is the hard part.

Compute infrastructure (Cerebras, SambaNova, CoreWeave). Nvidia + AMD strategic involvement means wafer-scale alternatives like Cerebras don't get the initial business. As compute demand scales, inference-side providers (Groq, Cerebras) may pick up later phases.

Historical Parallels — Wins and Losses

Win: DeepMind (founded 2010, acquired by Google 2014). Demis Hassabis, Shane Legg, Mustafa Suleyman. Initial mission: "solve intelligence." Google acquired for $600M. Delivered AlphaGo (2016), AlphaFold (2018), AlphaProof (2024). Recursive Superintelligence echoes DeepMind's early-stage ambition. The difference: DeepMind operates inside Google; Recursive is independent.

Partial win: OpenAI (founded 2015, for-profit pivot 2019). Nonprofit start; pivoted to capped-profit in 2019; GPT-3 (2020), GPT-4 (2023) put it at #1. "AGI for humanity" morphed into "AGI revenue." Recursive Superintelligence may follow — "self-improving AI" mission converting into a regular lab revenue model within 5-10 years.

Partial loss: Wave Computing (2008-2020). AI chip + AI systems company. SoftBank backed its Series D. Went bankrupt in 2020. Failure causes — (a) tech immaturity, (b) no revenue model, (c) the market didn't trust GPU alternatives. Recursive avoids these traps with multi-vendor GPU supply and a path to enterprise applications.

Loss: Vicarious AI (2010-2022). Brain-inspired AI company. $400M+ raised over 12 years. Acquired by Alphabet (Intrinsic) in 2022, effectively winding down. Failure causes — too early, too ambitious, no revenue model. Recursive Superintelligence benefits from 2026 tech maturity but inherits the "ambitious thesis" risk.

Competitor Counter-Plays

OpenAI. Possible counters: (a) "OpenAI Research Agents" or similar internal self-improving system, (b) reposition R&D-heavy in contrast to Anthropic, (c) sharpen Altman's "AGI mission" narrative. With revenue-acceleration as priority, capital for self-improving R&D is constrained.

Anthropic. Strengthen Constitutional AI's self-critique loop. A "Recursive Constitutional Loop" for safe self-improvement is plausible — synergistic with Anthropic's "safety-first" positioning. The $950B funding talk fully funds it.

DeepMind. AlphaProof-style verification + Gemini fine-tuning automation. Google I/O 2026 (May 19) could include a "Gemini Recursive" SKU. DeepMind already has experience with self-play in AlphaGo and AlphaStar.

Sakana AI (Japan). 2024 "AI Scientist" announcement — AI that auto-writes papers and runs peer-review. Direct conceptual comparable to Recursive Superintelligence. Smaller funding (Series A ~$20M+), but backed by Japan government + Lux Capital.

xAI Grok. Musk's "truth-seeking AI" framing. After Grok-4, a self-improvement round is plausible. Differentiates via SpaceX/Tesla proprietary data.

Chinese labs (DeepSeek, Moonshot, Qwen). DeepSeek's reasoning models (R1, R2) already do some auto-data-generation + self-improvement loops. Chinese government backing produces parallel competition.

So What Actually Changes — by Persona

AI PhDs and senior engineers. Major recruiting magnet. For "want a next-gen lab beyond OpenAI/Anthropic/DeepMind" talent, this is the strongest option. Sub-30-person entry could mean 10-100× equity step-up. Recommendation: monitor recruiting.

AI safety researchers. Concern + opportunity. "Self-improving AI" is the most cited safety risk area. Simultaneously — if Recursive funds safety infrastructure, external researchers get consulting and grant opportunities. Watch the 6-12 month response from MIRI and Anthropic.

Investors and VCs. New "third option" emerges in the lab market. Possible follow-on participation, though the $4.65B mark is already steep. LPs unable to participate directly can get indirect exposure via GV, Greycroft, Nvidia, AMD.

Enterprise IT / CIOs. No short-term impact. First commercial product likely mid-2026 to 2027. Enterprise adoption later. Stay with OpenAI/Anthropic/Google for now.

Regulators. Self-improving AI likely falls under "high-risk AI" in the EU AI Act and U.S. NIST frameworks. Regulatory engagement is likely within 6-12 months. How EU AI Office (enforcement kicks in August) classifies the company is a key variable.

End users. No direct effect. Indirect: (a) AI model improvement velocity rises, (b) AI safety discourse intensifies, (c) public conversation about AI control over the next 5-10 years accelerates.

Existing lab employees (OpenAI, Anthropic, Google). Job-hopping option expands. Recursive will aggressively recruit from these labs. Existing stock options vs. Recursive equity becomes the trade-off to evaluate.
