In 1966, a 200-Line Program Fooled People into Thinking a Computer Understood Them. We're Still Falling for It.
Joseph Weizenbaum's ELIZA used simple pattern matching to mimic a therapist. It had no understanding, no memory, no intelligence. His own secretary asked him to leave the room so she could talk to it privately.
Key Takeaways
- ELIZA was written in MAD-SLIP by Joseph Weizenbaum at MIT in 1966 — roughly 200 lines of code
- Used only pattern matching and keyword substitution — zero actual comprehension
- Weizenbaum's own secretary asked for privacy to speak with the program
- The episode gave its name to 'the ELIZA effect' — humans projecting understanding onto machines that have none
Root Connection
Weizenbaum named the program after Eliza Doolittle in Shaw's Pygmalion — a character taught to speak 'properly' without understanding why. He chose the name as a warning. Most people missed the irony.
Chatbot Complexity: Lines of Code Over Time
[Chart omitted. Caption: GPT-4's parameter count makes line counts meaningless; published estimates put it around 1.8 trillion parameters. Source: RootByte research / published estimates.]
Timeline
1950: Alan Turing publishes 'Computing Machinery and Intelligence' — proposes the Turing Test
1966: Joseph Weizenbaum creates ELIZA at MIT — 200 lines of code mimic a Rogerian therapist
1972: Kenneth Colby creates PARRY — a chatbot simulating a paranoid schizophrenic
1995: Richard Wallace creates ALICE using AIML — it goes on to win the Loebner Prize three times
2011: Apple launches Siri — voice assistants enter the mainstream
2022: OpenAI releases ChatGPT — 100 million users in two months
In 1966, at MIT's Artificial Intelligence Laboratory, Joseph Weizenbaum wrote a program called ELIZA. It was about 200 lines of code in a language called MAD-SLIP. It had no neural networks, no training data, no machine learning. It used pattern matching: find a keyword in the user's sentence, pick the canned response template tied to that keyword, slot the user's own words back into it, and spit it back.
User: 'I'm feeling sad today.'
ELIZA: 'Why do you think you're feeling sad?'
User: 'My mother doesn't understand me.'
ELIZA: 'Tell me more about your mother.'
That was the entire trick. ELIZA's most famous script, DOCTOR, mimicked a Rogerian psychotherapist — a style of therapy where the therapist reflects the patient's words back as questions. It was the perfect disguise for a program that understood nothing. Rogerian therapy already sounds like a machine repeating what you say.
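To make the trick concrete, here is a minimal sketch in Python of the same idea: keyword rules, canned templates, and pronoun flipping. It is an illustration of the technique, not Weizenbaum's code (ELIZA was written in MAD-SLIP), and the rules and function names below are invented for the example.

```python
import random
import re

# Illustrative DOCTOR-style rules: a keyword pattern plus canned response templates.
# These are invented for this example, not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bi'?m (?:feeling )?(.+)", re.IGNORECASE),
     ["Why do you think you're {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.IGNORECASE),
     ["Tell me more about your {0}.", "What is your relationship with your {0} like?"]),
    (re.compile(r"\bi (?:want|need) (.+)", re.IGNORECASE),
     ["Why do you want {0}?", "What would it mean to you to get {0}?"]),
]

# Flip first and second person so the echoed phrase reads naturally
# ("my job" becomes "your job").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "myself": "yourself"}

def reflect(phrase: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in phrase.split())

def respond(sentence: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(sentence)
        if match:
            # Echo the user's own words back inside a canned template.
            fragment = reflect(match.group(1)).rstrip(".!?")
            return random.choice(templates).format(fragment)
    # No keyword matched: fall back to a content-free prompt, much as DOCTOR did.
    return "Please go on."

if __name__ == "__main__":
    print(respond("I'm feeling sad today."))            # e.g. Why do you think you're sad today?
    print(respond("My mother doesn't understand me."))  # e.g. Tell me more about your mother.
```

The real program layered more machinery on top (ranked keywords and decomposition/reassembly rules), but the core loop was no deeper than this.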
The response was immediate and disturbing. Weizenbaum's secretary — a woman who knew he had programmed ELIZA, who had watched him write the code — asked him to leave the room so she could have a private conversation with it. Psychiatrists at MIT suggested ELIZA could be used for real therapy. The program had no memory between sentences. It forgot everything the moment you stopped typing.
Weizenbaum's secretary asked him to leave the room so she could speak privately with ELIZA. The program had no memory, no understanding, and no idea she was there.
Weizenbaum was horrified. He had built ELIZA as a demonstration of how shallow natural language processing was — how easy it was to create the illusion of understanding without any actual understanding. Instead, people treated it as proof that machines could think.
He named the program after Eliza Doolittle from George Bernard Shaw's Pygmalion — a flower girl taught to speak 'properly' without comprehending the social world she was imitating. The name was a deliberate warning about mistaking performance for comprehension. Almost everyone missed the irony.
Weizenbaum spent the rest of his career warning about the dangers of attributing understanding to machines. His 1976 book, Computer Power and Human Reason, argued that certain human roles — therapists, judges, teachers — should never be delegated to computers, regardless of how convincing they seem. The book was largely ignored by the AI community.
He named it after Eliza Doolittle — a character taught to mimic without understanding. The irony was the point. Nobody got it.
What he had witnessed later acquired a name: 'the ELIZA effect.' It describes the human tendency to project genuine understanding onto systems that are simply following patterns. In 1966, the pattern was keyword matching. In 2026, the pattern is statistical prediction across trillions of tokens. The ELIZA effect hasn't weakened. If anything, it's stronger.
When ChatGPT launched in November 2022 and gained 100 million users in two months, the conversations were eerily familiar. People confided in it. They asked it for life advice. They described it as 'understanding' them. The technology was incomparably more sophisticated than ELIZA. The human response was identical.
Weizenbaum died in 2008, in Berlin. He never saw ChatGPT. But he predicted it — or rather, he predicted us. The problem was never whether machines could understand. The problem was always that we so desperately want them to.
How did this make you feel?