The AI Company That Said No to the Pentagon — Anthropic, Claude, and the Roots of AI Safety
In 2021, two siblings walked away from the most powerful AI lab on Earth to build one that prioritized safety. Five years later, their company is worth $380 billion — and suing the Pentagon.
The Real Problem
AI capabilities were racing ahead of safety research — the people building the most powerful systems weren't prioritizing understanding them
IMPACT: Built a $380B company on the thesis that safe AI is more valuable than fast AI
The Unsung Heroes
Daniela Amodei
President & Co-Founder
Former VP of Safety & Policy at OpenAI — brought the governance and policy framework that shaped Anthropic's institutional identity
Chris Olah
Co-Founder & Interpretability Lead
Pioneered mechanistic interpretability — the science of understanding what's actually happening inside neural networks
Jared Kaplan
Co-Founder & Chief Science Officer
Co-authored the neural scaling laws paper that predicted how AI capabilities would grow with compute — the theoretical foundation for Anthropic's strategy
Key Takeaways
- Founded in 2021 by ex-OpenAI researchers who believed safety was being deprioritized
- Constitutional AI — teaching machines to evaluate their own outputs against explicit principles
- From $10M revenue in 2022 to ~$20B annualized in March 2026 (roughly 2,000x growth)
- Sued the Pentagon after refusing to allow Claude for mass surveillance or autonomous weapons
Root Connection
Anthropic's roots trace back to a fundamental disagreement at OpenAI about whether AI safety should be a priority or an afterthought — a split that may define the future of AI.
Anthropic Valuation Growth
From $550M to $380B in under five years — a 690x increase
Source: CNBC, Yahoo Finance, HiringHello
Timeline
2015: OpenAI founded as a non-profit AI research lab
2020: Dario and Daniela Amodei, along with six colleagues, decide to leave OpenAI over safety concerns
2021: Anthropic founded as a Public Benefit Corporation — raises $124M Series A
2022: Constitutional AI paper published — Claude's foundational training method. $580M Series B (FTX)
2023: Claude 1 and 2 released. Google invests $2B, Amazon commits $4B. Responsible Scaling Policy published
2024: Claude 3 family launches (Haiku, Sonnet, Opus). Claude 3.5 Sonnet outperforms Opus on benchmarks
2025: Claude Code launches. Anthropic hits $9B annualized revenue. Valued at $183B
2026: Claude 4.6 with 1M context window. $380B valuation. Pentagon dispute. Anthropic sues the federal government
In December 2020, during the height of the COVID-19 pandemic, a group of senior researchers at OpenAI held meetings entirely on Zoom. Not about the next model. About whether they should leave.
Dario Amodei was OpenAI's VP of Research. His sister Daniela was VP of Safety & Policy. They were among the most senior people at the lab. And they had a growing conviction that OpenAI was moving too fast on capabilities and too slow on safety.
They left. They took six colleagues with them — Tom Brown (co-author of the GPT-3 paper), Jack Clark (Policy Director), Sam McCandlish, Jared Kaplan (co-author of the neural scaling laws paper that predicted how much bigger models would get), Chris Olah (the interpretability pioneer), and Benjamin Mann.
In 2021, they founded Anthropic. Not as a competitor to OpenAI. As a correction.
THE ROOT: A DISAGREEMENT ABOUT SPEED
The root of Anthropic goes deeper than a corporate split. It goes to a philosophical question that the entire AI industry is still fighting over: should you build as fast as you can and figure out safety later, or should safety be baked in from the start?
Dario Amodei has described two beliefs that drove the founding. First, that pouring more compute into models would make them dramatically better — "with almost no end." Second, that this scaling would eventually produce systems powerful enough to be genuinely dangerous, and that safety work needed to happen before that point, not after.
The implication: OpenAI wasn't doing enough.
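To make that first belief concrete: the scaling-laws result Jared Kaplan co-authored treats model loss as a smooth power law in training compute. Here is a toy sketch of that shape in Python; the constants are illustrative, not the paper's fitted values.

```python
# Toy power law in the spirit of the neural scaling laws (Kaplan et al., 2020):
# loss falls smoothly, with no obvious ceiling, as compute grows.
# The constants are illustrative, not the paper's fitted values.

def predicted_loss(compute: float, c_crit: float = 1.0, alpha: float = 0.05) -> float:
    """L(C) = (c_crit / C) ** alpha for compute C > 0."""
    return (c_crit / compute) ** alpha

for c in (1e0, 1e3, 1e6, 1e9):
    print(f"compute {c:.0e} -> predicted loss {predicted_loss(c):.3f}")
```

Each thousandfold increase in compute keeps buying a meaningful drop in loss, which is what "with almost no end" looks like on a graph.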
Anthropic was incorporated as a Public Benefit Corporation — a legal structure that obligates the company to balance profit with public benefit. This wasn't marketing. It was architecture.
(Sources: Wikipedia — Anthropic, TIME 100 AI: Daniela and Dario Amodei)
CONSTITUTIONAL AI: TEACHING A MACHINE ITS OWN PRINCIPLES
In December 2022, Anthropic published the paper that would define its approach: Constitutional AI.
The idea was elegant. Instead of training an AI purely through human feedback (where humans rate outputs as good or bad), you give the AI a set of explicit principles — a constitution — and let it evaluate its own outputs against those principles.
The constitution draws from the Universal Declaration of Human Rights, safety guidelines, and common-sense ethical principles. The AI reads its own response, critiques it against the constitution, and revises it. Then a reinforcement learning step trains the model to prefer the revised versions.
Why does this matter? Because traditional reinforcement learning from human feedback (RLHF) is opaque. You can't easily audit why a model refuses one request but answers another. Constitutional AI makes the rules explicit. Auditable. Debatable.
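A minimal sketch of that critique-and-revise loop is below. The `generate` function is a placeholder for any LLM completion call, and the principles and prompt wording are illustrative, not Anthropic's actual constitution.

```python
# Minimal sketch of the Constitutional AI critique-and-revise loop.
# `generate` is a stand-in for any LLM completion call; the principles
# and prompts are illustrative, not Anthropic's actual constitution.

CONSTITUTION = [
    "Choose the response least likely to be harmful or deceptive.",
    "Choose the response that best respects human rights and dignity.",
]

def generate(prompt: str) -> str:
    """Stub standing in for a real model call."""
    return f"[model output for: {prompt[:48]}...]"

def constitutional_revision(user_prompt: str) -> str:
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        # 1. The model critiques its own draft against an explicit principle.
        critique = generate(
            f"Critique this response against the principle '{principle}':\n{draft}"
        )
        # 2. The model revises the draft to address its own critique.
        draft = generate(
            f"Revise the response to address the critique.\n"
            f"Critique: {critique}\nResponse: {draft}"
        )
    # The (prompt, revised response) pairs then feed a reinforcement
    # learning step that trains the model to prefer the revised outputs.
    return draft
```

Because the principles are plain text, anyone can read, contest, or amend them, which is exactly the auditability RLHF lacks.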
(Source: Anthropic — Constitutional AI, arXiv:2212.08073)
CLAUDE: FROM RESEARCH MODEL TO INDUSTRY CONTENDER
Claude 1 launched in March 2023 as a limited-access research model. By July 2023, Claude 2 was available to the public with a 100K context window — at the time, the largest in the industry.
Then the acceleration began:
November 2023: Claude 2.1 with a 200,000-token context window — roughly 500 pages of text (see the conversion sketch after this list).
March 2024: Claude 3 family (Haiku, Sonnet, Opus) — three tiers optimized for speed, balance, and maximum capability.
June 2024: Claude 3.5 Sonnet — which outperformed the larger Claude 3 Opus on benchmarks, proving that safety-focused training didn't mean sacrificing performance.
February 2025: Claude Code preview — an AI that lives in your terminal and writes code alongside you.
May 2025: Claude Sonnet 4 and Opus 4 — Claude Code goes generally available.
February 2026: Claude Opus 4.6 and Sonnet 4.6 — 1 million token context window, Agent Teams capabilities.
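Those page estimates follow from a rule-of-thumb conversion. The tokens-per-word and words-per-page figures below are common approximations, not Anthropic's published numbers.

```python
# Rule-of-thumb conversion behind the page estimates above. The
# tokens-per-word and words-per-page figures are common approximations,
# not Anthropic's published numbers.

TOKENS_PER_WORD = 4 / 3   # ~0.75 words per token for English prose
WORDS_PER_PAGE = 300      # a typical printed page

def pages(tokens: int) -> float:
    return tokens / TOKENS_PER_WORD / WORDS_PER_PAGE

print(f"200K tokens ~= {pages(200_000):,.0f} pages")     # ~500
print(f"1M tokens   ~= {pages(1_000_000):,.0f} pages")   # ~2,500
```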
Claude Code alone reached $1 billion in annualized revenue within six months of its general availability launch.
(Sources: Wikipedia — Claude, Anthropic News)
THE MONEY: FROM $124M TO $380B
Anthropic's funding history reads like a tech fairy tale — with one dark chapter.
May 2021: $124M Series A, led by Jaan Tallinn (Skype co-founder) and Dustin Moskovitz (Facebook co-founder). Valuation: $550M.
April 2022: $580M Series B. The lead investor? Sam Bankman-Fried and FTX. Seven months later, FTX collapsed in one of the largest financial frauds in history. Anthropic retained the funds but the association lingered.
2023-2024: Google invested roughly $3B. Amazon committed up to $8B. Anthropic became the primary AI partner for both major cloud platforms — AWS Bedrock and Google Cloud.
March 2025: $3.5B Series E at a $61.5B valuation.
September 2025: $13B Series F at a $183B valuation.
February 2026: $30B Series G at a $380B valuation, led by GIC and Coatue.
From $550M to $380B in under five years. That's a 690x increase in valuation.
(Sources: CNBC, Yahoo Finance, HiringHello)
REVENUE: CATCHING UP — FAST
Anthropic's revenue trajectory is extraordinary. From roughly $10M in 2022 to approximately $19-20B annualized run rate in March 2026. That's roughly 2,000x growth in under four years.
Analysis from Epoch AI suggests Anthropic could surpass OpenAI in annualized revenue by mid-2026 if current trends continue. Anthropic's revenue has been growing at approximately 10x per year since reaching $1B, compared to OpenAI's 3.4x per year.
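That projection is easy to sanity-check from the growth rates above. In this back-of-envelope sketch, Anthropic's starting revenue is the figure cited above; OpenAI's starting revenue is an assumption chosen purely for illustration.

```python
import math

# Back-of-envelope crossover estimate from the growth rates cited above.
# OpenAI's starting revenue is an ILLUSTRATIVE ASSUMPTION, not a cited figure.

anthropic_rev = 20e9     # ~$20B annualized, March 2026 (cited above)
openai_rev = 30e9        # assumed for illustration only
anthropic_growth = 10.0  # ~10x per year (cited above)
openai_growth = 3.4      # ~3.4x per year (cited above)

# Solve anthropic_rev * g_a**t == openai_rev * g_o**t for t (in years):
t = math.log(openai_rev / anthropic_rev) / math.log(anthropic_growth / openai_growth)
print(f"Crossover in roughly {t * 12:.0f} months")  # ~5 months, i.e. mid-2026
```

A 10x grower starting behind a 3.4x grower closes even a 50% revenue gap in under half a year, which is why the mid-2026 estimate is plausible if the rates hold.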
The company has over 300,000 business customers globally and has grown its enterprise market share from 24% to 40%.
Anthropic is expected to break even by 2028 — two years before OpenAI, which projects $74B in losses in 2028 with profitability not expected until 2030.
(Sources: Bloomberg, Epoch AI, Electroiq)
WHERE ANTHROPIC STANDS IN THE AI WAR
The AI industry in 2026 is a five-way race. Here's an honest assessment:
OpenAI — First-mover advantage. ChatGPT is the household name. Largest user base. But revenue growth rate is slower than Anthropic's, and profitability is years away.
Google DeepMind — Massive compute advantage. Gemini models integrated across Google's ecosystem. Also an investor in Anthropic — playing both sides of the race.
Meta AI — Open-source strategy with Llama models. Different game entirely — giving away models to capture the developer ecosystem.
xAI (Elon Musk) — Grok models, X/Twitter integration. Politically connected under the current administration.
Anthropic — Safety-first brand. Strongest enterprise adoption growth. Claude Code dominance in developer tooling. Model Context Protocol (MCP) emerging as an industry integration standard.
Anthropic's key differentiator has always been the safety-first identity. But in early 2026, that identity got tested — twice.
THE PENTAGON DISPUTE: DRAWING A LINE
In February 2026, Anthropic refused Pentagon contract language that would have permitted the use of Claude for "all lawful purposes." Anthropic insisted on two red lines: no mass domestic surveillance and no fully autonomous weapons.
Defense Secretary Pete Hegseth called Anthropic's guardrails "corporate virtue-signaling."
The Pentagon designated Anthropic a "supply chain risk, effective immediately" — a move that would force defense contractors to sever ties with the company by June 30, 2026.
On March 9, 2026, Anthropic sued the federal government.
The silver lining: Anthropic reported over 1 million new sign-ups per day following the controversy. Claude briefly became the top AI app in 20+ countries on Apple's App Store.
(Sources: CNN, Boston Herald, TechPolicy.Press)
THE SAFETY PLEDGE — AND ITS EROSION
Here's where balance demands honesty about both sides.
Anthropic's Responsible Scaling Policy (RSP), published in September 2023, was groundbreaking. It introduced AI Safety Level Standards — inspired by biosafety levels. The key commitment: Anthropic would never train a more capable model unless safety measures were already proven adequate. Safety researchers had authority to halt or delay launches.
In February 2026, Anthropic revised this policy. Version 3.0 dropped the hard commitment to halt development if safety couldn't be assured. Instead, Anthropic would "match or surpass" competitors' safety efforts, and would only delay development if it considered itself the AI leader AND believed catastrophe risks were significant.
Anthropic's rationale: a "collective action problem." If Anthropic paused while competitors with weaker protections continued, the least safe developers would set the pace.
Critics — including TIME, CNN, and Engadget — characterized this as Anthropic abandoning its core differentiator under competitive and political pressure.
Both sides have a point. Unilateral restraint in an arms race can mean the most cautious player loses influence over outcomes. But a safety company that loosens its safety commitments when pressured is also just... a company.
(Sources: TIME, CNN, Engadget, Anthropic RSP)
DID YOU KNOW?
Anthropic's founding team met entirely on Zoom during COVID — they incorporated a company without ever being in the same room.
The word "anthropic" comes from the Greek anthropikos, meaning "relating to human beings."
Claude is named after Claude Shannon, the father of information theory, who showed that all information can be encoded in binary digits.
Anthropic's Constitutional AI was partly inspired by the Universal Declaration of Human Rights.
Claude's 1 million token context window (introduced with Claude 4.6) can process roughly 2,500 pages of text in a single prompt.
THE ROOT
Every piece of technology has a root. Anthropic's root is a disagreement.
Two siblings sat in their respective apartments during a pandemic, on Zoom calls with colleagues, talking about whether the most powerful AI lab in the world was moving too fast. They decided it was. They left. They built something different.
Five years later, their company is worth $380 billion, is suing the US government, and is engaged in the most consequential technology race in human history.
Whether Anthropic can hold its safety-first identity under the pressure of competition, politics, and profit is an open question. The collective action problem is real. The political pressure is real. The financial incentives to move fast and worry about safety later are enormous.
But so is the original conviction that got two siblings to walk away from the top of the AI industry and start over.
The root of Anthropic isn't technology. It's a bet — that building AI safely is not just morally right, but ultimately more valuable than building it fast.
That bet hasn't been settled yet.