Agent-Oriented Programming Was a Failed Academic Idea in 1993 — Now AI Is Making It the Future of Code
In 1993, Yoav Shoham proposed that programs should be built from autonomous agents with beliefs, desires, and intentions. Nobody cared. Thirty years later, we're building exactly that — and calling it the AI revolution.
Key Takeaways
- Yoav Shoham coined 'Agent-Oriented Programming' in a 1993 Stanford paper
- AOOP builds on the BDI model: agents have Beliefs (knowledge), Desires (goals), and Intentions (plans)
- The paradigm died in academia by 2010: too complex, no practical tooling
- Modern AI agents (Claude Code, GPT-4 agents, AutoGPT) are accidentally implementing Shoham's 30-year-old vision
- AOOP differs from OOP: objects are passive (called by others), agents are autonomous (decide when to act)
Root Connection
Agent-Oriented Programming traces from Yoav Shoham's 1993 paper straight back to the BDI model of Bratman (1987) — a philosopher's theory about how humans make rational decisions, now powering AI agents that book flights and write code.
[Chart: AI Agent Frameworks Released Per Year. The agent framework explosion mirrors Shoham's 1993 vision, 30 years late. Source: GitHub Trending + Papers With Code]
Timeline
- 1987: Philosopher Michael Bratman publishes the BDI model: Beliefs, Desires, Intentions, a framework for rational human agency
- 1993: Yoav Shoham (Stanford) publishes 'Agent-Oriented Programming', proposing that software be built from autonomous agents instead of objects
- 1999: JADE framework released, the first widely used agent development platform (Java)
- 2003: Jason framework implements BDI agents in AgentSpeak, the closest realization of Shoham's vision
- 2010s: Interest in AOOP fades: too complex, too academic, no killer use case. Multi-agent systems become a niche research area
- 2023: ChatGPT + AutoGPT explosion: AI agents go mainstream. The 'agent' concept returns, turbocharged by LLMs
- Today: Claude, GPT, Gemini all offer agent modes. Anthropic ships an Agent SDK. Microsoft builds multi-agent frameworks. Shoham's vision arrives, 33 years late
In 1993, a Stanford computer scientist named Yoav Shoham published a paper that almost nobody read.
It was called 'Agent-Oriented Programming.' The thesis was simple but radical: object-oriented programming, for all its success, had a fundamental limitation. Objects are passive. They sit there waiting for someone to call their methods. They don't perceive, they don't reason, they don't decide. They're tools, not actors.
Shoham proposed a different paradigm. Instead of objects, build software from agents — autonomous entities that have beliefs about the world, desires for outcomes, and intentions to act. An agent doesn't wait to be called. It observes its environment, reasons about what to do, and acts on its own.
The idea wasn't born in a vacuum.
In 1987, philosopher Michael Bratman had published a theory of human rational agency called the BDI model — Beliefs, Desires, Intentions. Bratman argued that humans don't just react to stimuli. We maintain beliefs about how the world works, we have desires about how we want it to be, and we form intentions — plans of action — to bridge the gap. This three-part architecture, Bratman argued, is what makes human decision-making rational.
Shoham's 1993 paper described agents as entities with beliefs about the world, desires for outcomes, and intentions to act. Replace 'beliefs' with 'context window,' 'desires' with 'system prompt,' and 'intentions' with 'tool calls' — and you've described every modern AI agent.
Shoham saw BDI and thought: what if software worked the same way?
In his AOOP model, an agent maintains a set of beliefs (its current understanding of the world). It has desires (goals it wants to achieve). And it forms intentions (concrete plans to execute). Unlike objects, which are defined by what they are, agents are defined by what they want and how they pursue it.
The difference from OOP is subtle but fundamental.
In OOP, you might have a ThermostatObject with a method called adjustTemperature(). Something else calls that method. The thermostat does what it's told.
OOP said: model the world as objects. AOOP says: model the world as agents — autonomous entities that perceive, reason, and act. The difference? Objects wait to be called. Agents decide to act.
In AOOP, you'd have a ThermostatAgent that believes the room is 78°F, desires the room to be 72°F, and forms the intention to turn on the AC. Nobody tells it to act. It perceives the gap between its beliefs and desires and acts autonomously.
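The contrast reads clearly as code. Here is a minimal Python sketch; the class and method names are invented for illustration and don't come from any real agent framework:

```python
# OOP: a passive object. It changes temperature only when something calls it.
class ThermostatObject:
    def __init__(self, room_temp):
        self.room_temp = room_temp

    def adjust_temperature(self, target):
        # Does exactly what it is told, nothing more.
        self.room_temp = target


# AOOP: an autonomous agent with beliefs, desires, and intentions.
class ThermostatAgent:
    def __init__(self):
        self.beliefs = {"room_temp": None}   # its current model of the world
        self.desires = {"room_temp": 72}     # the state it wants to reach
        self.intentions = []                 # plans it has committed to

    def perceive(self, sensor_reading):
        self.beliefs["room_temp"] = sensor_reading

    def deliberate(self):
        # Compare beliefs to desires; form an intention to close the gap.
        gap = self.beliefs["room_temp"] - self.desires["room_temp"]
        if gap > 0:
            self.intentions.append("turn_on_ac")
        elif gap < 0:
            self.intentions.append("turn_on_heat")

    def act(self):
        # Execute the next committed intention on the agent's own loop;
        # nothing outside the agent tells it to do this.
        return self.intentions.pop(0) if self.intentions else None


agent = ThermostatAgent()
agent.perceive(78)    # belief: the room is 78 deg F
agent.deliberate()    # desire is 72 deg F, so it intends to turn on the AC
print(agent.act())    # prints: turn_on_ac
```

The object's state changes only through an external call; the agent's behavior falls out of the gap between its beliefs and its desires.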
Shoham built a prototype language called AGENT-0. It could represent mental states, send messages between agents, and handle commitment rules. It was elegant. It was theoretically beautiful.
And it went absolutely nowhere.
The academic community tried. JADE (Java Agent DEvelopment framework) launched in 1999, giving developers a platform for building multi-agent systems in Java. Jason, released in 2003, implemented BDI agents in a language called AgentSpeak — the closest anyone got to Shoham's original vision. JACK, another framework, was used in some military and logistics applications.
But the problem was practical. Building agents was hard. The BDI model required developers to think about mental states, commitment strategies, and inter-agent communication protocols. OOP was conceptually simpler: here's a class, here are its methods, call them. Agents required a fundamentally different way of thinking about software, and most developers didn't see the payoff.
By 2010, Agent-Oriented Programming was a footnote in graduate-level AI textbooks. Multi-agent systems remained a niche research area. The paradigm had failed to find its killer use case.
Then, in 2023, everything changed.
ChatGPT had launched in late 2022. By early 2023, developers started building 'AI agents' — systems that could perceive their environment (through text, code, APIs), reason about what to do (through large language models), and act autonomously (through tool calling and code execution).
AutoGPT went viral in April 2023. It was a system that gave GPT-4 a goal, a set of tools, and the ability to act in a loop — perceive, reason, act, repeat. It was crude. It was unreliable. And it was, functionally, a BDI agent.
Suddenly Shoham's 30-year-old paper was relevant again.
Look at any modern AI agent framework — Anthropic's Claude Code, OpenAI's Assistants API, LangChain's agent loops, Microsoft's AutoGen — and you'll see the BDI architecture wearing a new suit.
Beliefs? That's the agent's context window — everything it knows about the current situation, including conversation history, file contents, and tool results.
Desires? That's the system prompt and user goal — what the agent is trying to achieve.
Intentions? That's the agent's plan of tool calls — the sequence of actions it decides to take to achieve the goal.
The mapping is almost exact. Shoham described agents that maintain mental states, communicate with other agents, and reason about commitments. Modern AI agents maintain conversation state, communicate with APIs and other agents, and reason about which tools to call.
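The mapping can be written down directly. The dictionary below loosely mirrors the shape of chat-style agent APIs; the field names are illustrative, not any vendor's exact schema:

```python
# The BDI triad, expressed in the shape of a modern chat-style agent request.
# Field names are illustrative, not a real vendor schema.
request = {
    # Desires: the system prompt encodes what the agent should achieve.
    "system": "You are a travel agent. Book the cheapest flight to Tokyo.",
    # Beliefs: the context window holds everything the agent currently knows.
    "messages": [
        {"role": "user", "content": "I need to be in Tokyo by Friday."},
        {"role": "tool", "content": "search_flights -> [NH105: $612, ...]"},
    ],
    # Intentions: the plan the model emits as its next tool calls.
    "tool_calls": [
        {"name": "book_flight", "args": {"flight": "NH105"}},
    ],
}

# Shoham's 1993 vocabulary, recovered from the 2020s request shape.
bdi_view = {
    "beliefs": request["messages"],
    "desires": request["system"],
    "intentions": request["tool_calls"],
}
print(bdi_view["intentions"][0]["name"])  # prints: book_flight
```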
The difference is that Shoham had to build the reasoning from scratch using logic programming. Today, the reasoning comes free — it's the LLM itself. A large language model is, in a real sense, a general-purpose reasoning engine that can power any agent architecture.
This is why AOOP is experiencing a resurrection.
Multi-agent systems — once a theoretical curiosity — are now a production reality. Anthropic's Agent SDK lets you build agents that spawn sub-agents. Microsoft's AutoGen creates teams of specialized agents that collaborate on complex tasks. CrewAI, Swarm, and dozens of other frameworks orchestrate multiple AI agents working together.
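The basic orchestration shape is simple to sketch. The `Agent` class and `orchestrate` function below are invented for illustration, with deterministic stand-ins where a real framework would put LLM-backed agents:

```python
# A minimal sketch of multi-agent delegation: an orchestrator hands one task
# to a team of specialists and collects their results. Real frameworks
# (AutoGen, CrewAI, agent SDKs) add LLM reasoning, messaging, and retries
# on top of this shape; these agents are deterministic stand-ins.
class Agent:
    def __init__(self, name, skill):
        self.name, self.skill = name, skill

    def handle(self, task):
        return f"{self.name}: {self.skill(task)}"

researcher = Agent("researcher", lambda t: f"found 3 sources on {t}")
writer = Agent("writer", lambda t: f"drafted a summary of {t}")

def orchestrate(task, team):
    # Each specialist works on the task; the orchestrator merges the results.
    return [agent.handle(task) for agent in team]

for line in orchestrate("BDI agents", [researcher, writer]):
    print(line)
```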
The vision Shoham articulated in 1993 — software built from autonomous, reasoning entities that cooperate to solve problems — is now being built at massive scale. The paradigm didn't fail. It was just 30 years too early.
There's a deeper lesson here.
OOP succeeded because it mapped to how humans naturally think about the world — as collections of things with properties and behaviors. AOOP is succeeding now because AI agents map to how humans naturally think about actors — as entities that perceive, reason, and decide.
Objects model the world. Agents model the actors in the world. The shift from OOP to AOOP isn't a revolution — it's an evolution. We went from modeling things to modeling beings.
Yoav Shoham is still at Stanford. He went on to co-found AI companies (including AI21 Labs) and to create the AI Index Report, one of the most cited annual assessments of AI progress.
He doesn't often get credited as a prophet of the agent revolution. But in 1993, he described software agents with beliefs, desires, and intentions — autonomous entities that perceive their environment and act on their own.
Thirty-three years later, millions of developers are building exactly that. They just call it 'AI agents' instead of 'Agent-Oriented Programming.'
The root of every AI agent on Earth isn't a neural network. It's a Stanford paper from 1993, a philosopher's model from 1987, and a question as old as computing: what happens when software stops waiting to be told what to do — and starts deciding for itself?