
Yann LeCun's AMI Labs Raises $1.03B Seed — The Biggest Contrarian Bet Against LLMs

AI pioneer Yann LeCun's AMI Labs raised Europe's largest-ever seed round at $1.03B. JEPA architecture, world models, investors, and competitive landscape.

5 min read · TechCrunch
AMI Labs founder Yann LeCun
Photo: AMI Labs

$1.03B. In a Single Seed Round. Europe's Largest Ever.

On March 10, Yann LeCun — Turing Award winner, CNN inventor, and one of the three "godfathers of deep learning" — announced that his startup AMI Labs (Advanced Machine Intelligence Labs) raised $1.03B in a seed round. Pre-money valuation: $3.5B. The company is just four months old.

This is the largest seed round in European history and among the largest globally. Investors include Jeff Bezos's Bezos Expeditions alongside major Silicon Valley and European VCs. France 24 reported that the French government welcomed the investment as a symbol of European AI sovereignty.

But the real story isn't the money. AMI Labs isn't building another LLM. It's building world models — a fundamentally different approach to AI that directly challenges the paradigm pursued by OpenAI, Anthropic, and Google.

Who Is Yann LeCun, and Why Does He Oppose LLMs?

LeCun won the 2018 Turing Award (computer science's Nobel Prize) for his work on convolutional neural networks. He's been Meta's Chief AI Scientist since 2013, leading FAIR (Facebook AI Research). For years, he's publicly criticized the fundamental limitations of LLMs:

| LeCun's Critique | Explanation |
| --- | --- |
| "Token prediction isn't understanding" | LLMs predict the next word probabilistically — they don't understand the world |
| "Autoregressive = error accumulation" | Generating tokens one-by-one compounds early errors |
| "Text alone can't teach world knowledge" | Humans learn through vision, touch, and movement |
| "Can't reach AGI this way" | LLM scaling alone won't achieve general intelligence |
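The error-accumulation critique is often stated as a simple probability argument. As an illustration only — under the strong simplifying assumption that each autoregressive step is independently correct with the same probability `p` — the chance of an entirely correct sequence decays exponentially with length:

```python
# Illustrative sketch of the compounding-error argument, assuming each
# autoregressive step is independently correct with probability p.
# Real token errors are neither independent nor equally likely, so this
# is a toy model of the critique, not a claim about any specific LLM.
def seq_success_prob(p: float, n: int) -> float:
    """Probability that all n generated tokens are correct."""
    return p ** n

# Even a 99%-per-step model degrades quickly over long generations.
for n in (10, 100, 1000):
    print(f"n={n}: {seq_success_prob(0.99, n):.6f}")
```

The independence assumption overstates the effect, but the qualitative point — that per-step error rates compound over long generations — is the core of the critique.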

This isn't just academic criticism. LeCun argues the entire LLM scaling paradigm is a dead end. Now he's putting $1.03B behind his alternative.

JEPA: The Alternative Architecture

AMI Labs' core technology is **JEPA** (Joint Embedding Predictive Architecture), developed at Meta FAIR:

| Aspect | LLMs (GPT, Claude) | JEPA (AMI Labs) |
| --- | --- | --- |
| Learning | Next-token prediction | Prediction in abstract representation space |
| Input | Text (+ images/audio) | Video, images, sensor data |
| Goal | Text generation | Understanding physical world dynamics |
| Architecture | Transformer (decoder) | Joint Embedding + energy-based model |
| Error mode | Hallucination | Can express uncertainty |
| Prediction target | Exact next token | Distribution of possible future states |

What Are World Models?

A world model is AI that understands how the physical world works. An LLM can answer "what happens when you throw a ball?" with text: "it falls." A world model can actually simulate the ball's trajectory given its mass, angle, and gravity — not through text, but through physical understanding.
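The ball example can be made concrete. A minimal sketch, using nothing but textbook projectile physics (launch from the ground, vacuum, g = 9.81 m/s²) — this stands in for what a learned world model would infer, not for anything AMI Labs has built:

```python
import math

# Toy "world model" in the narrowest sense: explicit physics equations
# instead of text. Assumes ground-level launch in a vacuum, where mass
# drops out entirely; g = 9.81 m/s^2 is the usual textbook value.
def ball_trajectory(speed: float, angle_deg: float, g: float = 9.81):
    """Return (time_of_flight, horizontal_range, peak_height) in
    seconds and meters for a projectile."""
    theta = math.radians(angle_deg)
    vx = speed * math.cos(theta)       # horizontal velocity component
    vy = speed * math.sin(theta)       # vertical velocity component
    t_flight = 2 * vy / g              # up and back down
    return t_flight, vx * t_flight, vy ** 2 / (2 * g)

t, dist, peak = ball_trajectory(10.0, 45.0)
print(f"flight {t:.2f}s, range {dist:.2f}m, peak {peak:.2f}m")
```

The point of the contrast: an LLM retrieves the word "falls"; a world model (here caricatured as hand-written equations) produces the actual trajectory, and a learned one would do so from raw video rather than formulas.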

This matters enormously for robotics, autonomous driving, and any application where AI must interact with the real world. A robot picking up an unfamiliar object needs physical intuition, not text knowledge.

The JEPA Technical Core

JEPA uses two encoders: a Context Encoder (converts current observations to abstract representations) and a Target Encoder (converts future states to abstract representations). A Predictor then predicts Target representations from Context representations.

The key insight: prediction happens at the abstract representation level, not at the pixel level. Predicting exact future images is unnecessarily hard (you don't need to predict the exact position of every raindrop). Abstract-level prediction captures what matters while ignoring irrelevant details.
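The two-encoder setup above can be sketched at the shape level. This is a didactic skeleton, not AMI Labs' implementation: linear maps stand in for deep networks, and the dimensions are arbitrary. The one property it does capture is that the loss lives in the 16-dimensional latent space, never in the 64-dimensional observation space:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shape-level sketch of a JEPA-style step. Linear "encoders" stand in
# for deep networks; D_OBS / D_LATENT are arbitrary illustration sizes.
D_OBS, D_LATENT = 64, 16
W_ctx = rng.normal(size=(D_LATENT, D_OBS)) * 0.1  # context encoder
W_tgt = W_ctx.copy()       # target encoder (in practice an EMA copy)
W_pred = np.eye(D_LATENT)  # predictor (identity here for simplicity)

def jepa_loss(obs_now: np.ndarray, obs_future: np.ndarray) -> float:
    """Distance between predicted and actual future representations,
    measured in latent space rather than pixel space."""
    s_ctx = W_ctx @ obs_now      # abstract representation of the present
    s_tgt = W_tgt @ obs_future   # abstract representation of the future
    s_hat = W_pred @ s_ctx       # prediction made entirely in latent space
    return float(np.mean((s_hat - s_tgt) ** 2))

loss = jepa_loss(rng.normal(size=D_OBS), rng.normal(size=D_OBS))
print(f"latent-space loss: {loss:.4f}")
```

Because the objective never touches the raw observation, pixel-level detail (the exact raindrop positions) simply cannot enter the loss — that is the structural difference from generative video models.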

JEPA's energy-based approach also structurally addresses hallucination. Unlike autoregressive models that must always produce a "most likely" token, energy-based models can naturally express "I don't know" through high energy values. This is critical for safety-critical applications like autonomous driving.
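The abstention behavior can be illustrated with a toy energy function. The Gaussian-style squared-distance energy and the threshold below are assumptions for the sketch, not AMI Labs' model; the point is only the mechanism — when every candidate future scores high energy, the system returns "I don't know" instead of a forced best guess:

```python
import numpy as np

# Toy energy-based scoring: low energy = (state, candidate-future) pair
# is compatible. Squared distance and the threshold value are arbitrary
# choices for illustration, not anything from AMI Labs.
def energy(state: np.ndarray, candidate: np.ndarray) -> float:
    return float(np.sum((candidate - state) ** 2))

def predict_or_abstain(state, candidates, threshold=1.0):
    """Return the lowest-energy candidate, or None ("I don't know")
    when even the best candidate scores above the threshold."""
    best_e, best_c = min(
        ((energy(state, c), c) for c in candidates), key=lambda x: x[0]
    )
    return best_c if best_e <= threshold else None

state = np.zeros(3)
plausible = [np.array([0.1, 0.0, 0.0]), np.array([2.0, 2.0, 2.0])]
implausible = [np.array([2.0, 2.0, 2.0]), np.array([3.0, 0.0, 0.0])]
print(predict_or_abstain(state, plausible))    # picks the nearby candidate
print(predict_or_abstain(state, implausible))  # None: abstains
```

An autoregressive decoder has no analogous escape hatch: its softmax must always put probability mass somewhere, which is one framing of why hallucination is structural rather than incidental.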

As LeCun told SiliconANGLE: "Text is a very impoverished representation of the world." Most information about the world is visual and physical, not textual.

Competitive Landscape

| Company | Approach | Strength | Limitation |
| --- | --- | --- | --- |
| AMI Labs | JEPA, energy-based | Strong theory, LeCun's vision | Not yet proven at scale |
| Google DeepMind (Genie) | Video generation | Massive compute + data | Generative model limits |
| OpenAI (Sora → World Sim) | Video gen to simulation | Funding, engineering | Tied to LLM paradigm |
| Meta FAIR (V-JEPA) | Original JEPA research | Academic depth | Slow commercialization |
| Nvidia (Cosmos) | Physics simulation | GPU ecosystem, Omniverse | Narrow scope |

Notably, LeCun maintains his Meta Chief AI Scientist role while running AMI Labs — a structure that could channel FAIR's research into commercial application.

Why It Matters

AMI Labs' $1.03B seed is the largest "contrarian bet" in AI history. The industry is pouring hundreds of billions into "make LLMs bigger." LeCun says the direction itself is wrong, and he's putting $1B behind proving his alternative.

If he succeeds, the AI paradigm shifts. If he fails, it strengthens the case that LLM scaling is the only path. Either way, this experiment will produce a critical data point for AI's future direction.

$1.03B seed, $3.5B valuation, Turing Award winner's direct venture. This isn't just a startup story — it's a formal declaration of war in the AI paradigm battle.

The fact that LeCun is taking a decade of Meta FAIR research and turning it into a company signals that world model technology is ready to be tested at industrial scale. For developers, this means keeping an eye on alternative architectures beyond transformers — the next breakthrough may not come from making GPT bigger, but from rethinking AI from first principles. Whether AMI Labs succeeds or not, March 2026 will be remembered as the moment the world model era officially began.

