Yann LeCun Left Meta to Build a Brain — The Theory Behind It Came From a 1943 Book
In late 2025, Yann LeCun left Meta to found AMI Labs, which raised over $1 billion, Europe's largest-ever AI seed round. His bet: LLMs are a dead end. The real AI breakthrough will come from 'world models', a concept first described by a Scottish psychologist in 1943.
Key Takeaways
- Yann LeCun left Meta in late 2025 after 12 years as Chief AI Scientist
- AMI Labs raised over $1 billion, Europe's largest-ever AI seed round
- LeCun argues LLMs are a dead end; real AI needs 'world models' that understand physics
- The 'world model' concept traces to Kenneth Craik's 1943 book about internal mental models
Root Connection
AMI Labs' 'world models' approach traces directly to Kenneth Craik's 1943 book 'The Nature of Explanation' — the argument that intelligence requires internal models of physical reality, not just pattern matching.
Timeline
1943: Kenneth Craik publishes 'The Nature of Explanation', arguing that brains build models of reality
1980s: The symbolic AI vs. connectionist debate intensifies, two visions of machine intelligence
1998: LeCun publishes LeNet-5, a convolutional neural network that reads handwritten digits
2013: LeCun joins Facebook (later Meta) as Chief AI Scientist
2025: LeCun leaves Meta, arguing LLMs are fundamentally limited
2026: AMI Labs raises a $1B+ seed round, with NVIDIA, Temasek, and Jeff Bezos among the backers
In 1943, Scottish psychologist Kenneth Craik published a slim volume called 'The Nature of Explanation.' Its central argument was deceptively simple: the human brain builds small-scale models of external reality and uses them to predict events. We don't just react to the world. We simulate it internally. We run mental experiments before taking physical actions.
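Craik's loop, simulate first and act second, is what AI researchers now call model-based planning. Here is a minimal sketch of the idea; the toy physics model and every name in it are illustrative, not anyone's actual system:

```python
# Craik's loop in miniature: run "mental experiments" inside an internal
# model of the world, and only then act. Everything here is a toy.

def internal_model(position, velocity, push, friction=0.1):
    """The brain's 'small-scale model': predict the next state of a
    simple object under an applied push."""
    velocity = velocity + push - friction * velocity
    return position + velocity, velocity

def choose_action(position, velocity, goal, candidate_pushes):
    """Simulate each candidate action internally; act on the best outcome."""
    def imagined_miss(push):
        imagined_position, _ = internal_model(position, velocity, push)
        return abs(imagined_position - goal)
    return min(candidate_pushes, key=imagined_miss)

best_push = choose_action(position=0.0, velocity=0.0, goal=5.0,
                          candidate_pushes=[-1.0, 0.0, 1.0, 2.0, 5.0])
print(f"act with push = {best_push}")  # chosen before touching the real world
```

The agent touches the real world only once, after running its experiments in imagination. That is the essence of Craik's claim.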
Craik died in a cycling accident in 1945 at age 31. His book was largely forgotten outside cognitive science.
Eighty-two years later, Yann LeCun — one of the three 'godfathers of deep learning,' a Turing Award winner, and Meta's Chief AI Scientist for over a decade — left Meta to build exactly what Craik described. His new company, AMI Labs, raised over $1 billion in early 2026, making it the largest seed round in European AI history. Backers include NVIDIA, Temasek, and capital linked to Jeff Bezos.
LeCun's thesis is a direct challenge to the AI industry's dominant paradigm. Large Language Models — GPT, Claude, Gemini, Llama — are fundamentally limited, he argues, because they only predict the next token in a sequence of text. They don't understand physics. They can't predict what happens when you push a cup off a table. They have no model of the world.
LeCun's verdict is blunt. LLMs grasp neither physics nor causality; they are, in his words, 'autoregressive parlor tricks.' Impressive, yes. But not intelligence. Not even close.
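'Autoregressive' is a precise complaint. An LLM produces text by sampling one token from a distribution conditioned on everything before it, appending it, and repeating. A stripped-down sketch of that loop, where a toy uniform 'model' stands in for billions of trained parameters (nothing here is any real API):

```python
import random

# Stand-in for a trained language model: map the tokens so far to a
# probability distribution over the next token. A real LLM does exactly
# this step, just with billions of parameters behind the distribution.
def next_token_distribution(tokens):
    vocab = ["the", "cup", "falls", "off", "table", "."]
    return {tok: 1.0 / len(vocab) for tok in vocab}  # toy uniform model

def generate(prompt_tokens, n_steps):
    """Autoregression: sample a token, append it, condition on it, repeat."""
    tokens = list(prompt_tokens)
    for _ in range(n_steps):
        dist = next_token_distribution(tokens)
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return tokens

print(" ".join(generate(["the", "cup"], n_steps=4)))
```

Nothing in the loop represents cups, tables, or gravity. The model's entire universe is the statistics of token sequences, which is precisely LeCun's objection.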
AMI Labs is building what LeCun calls a 'world model' — an AI system that learns to predict the physical world the way a baby does. Babies don't learn language first. They learn physics. They learn that objects fall, that things hidden behind other things still exist, that actions have consequences. This intuitive physics comes before language, before reasoning, before everything else.
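What would the alternative look like in code? The training signal changes: instead of predicting the next token of text, the system predicts the next state of its environment and learns from the error. A deliberately tiny sketch, with a one-parameter 'world model' recovering gravity by watching an object fall (again illustrative, not AMI Labs' actual architecture):

```python
# Toy world-model training: predict the next physical state, then learn
# from the prediction error. Everything here is illustrative.

def environment_step(height, velocity, dt=0.1, g=9.8):
    """Ground truth the learner never sees directly: an object in free fall."""
    velocity -= g * dt
    return height + velocity * dt, velocity

class TinyWorldModel:
    """One learnable parameter: the model's own estimate of gravity."""
    def __init__(self):
        self.g_estimate = 0.0  # starts out knowing no physics at all

    def predict(self, height, velocity, dt=0.1):
        v = velocity - self.g_estimate * dt
        return height + v * dt, v

    def learn(self, height, velocity, observed_height, dt=0.1, lr=0.5):
        predicted_height, _ = self.predict(height, velocity, dt)
        error = predicted_height - observed_height
        # Predicted height depends on g_estimate through -g*dt*dt, so
        # divide the error back out to nudge the estimate toward truth.
        self.g_estimate += lr * error / (dt * dt)

model = TinyWorldModel()
h, v = 100.0, 0.0
for _ in range(20):  # watch the object fall, predict, compare, adjust
    next_h, next_v = environment_step(h, v)
    model.learn(h, v, next_h)
    h, v = next_h, next_v

print(f"learned gravity: {model.g_estimate:.2f}  (true value: 9.8)")
```

The learner is never told that gravity is 9.8 m/s²; it recovers the number from its own prediction errors. Scale the idea up from one parameter to billions, and from falling objects to video of the world, and you have the shape of the research program LeCun describes.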
This is exactly Craik's argument from 1943. Intelligence is not pattern matching. It's not statistical prediction. It's building an internal model of reality that's accurate enough to simulate the future.
The implications are enormous. If LeCun is right, the current LLM gold rush — hundreds of billions of dollars invested in scaling language models — is building on a foundation that can never achieve true intelligence. You can scale GPT to a trillion parameters, and it still won't understand that water flows downhill.
This is the most significant philosophical schism in AI since the 1980s, when the field split between symbolic AI (rules and logic) and connectionism (neural networks and learning). The connectionists eventually won. LeCun was one of them: his convolutional neural networks, culminating in the 1998 LeNet-5 paper, showed that machines could read handwritten digits and helped lay the groundwork for the deep learning revolution.
Now LeCun is saying that his own revolution doesn't go far enough. Neural networks that predict text are a stepping stone, not a destination. The destination is machines that understand the world.
The AI community is divided. OpenAI and Anthropic believe that scaling language models, combined with reinforcement learning and tool use, will eventually produce general intelligence. LeCun believes this is like trying to reach the moon by climbing a very tall tree: you keep gaining altitude, but no amount of climbing gets you there.
A billion dollars says LeCun is right. Or at least that his investors think he might be.
Kenneth Craik never saw a computer. He died in 1945, months before ENIAC, the first electronic general-purpose computer, was completed. But his argument that intelligence requires an internal model of physical reality may turn out to be the most important idea in AI, 82 years after he wrote it down.