OpenAI Was Founded to Save Humanity From AI. Now It's Worth $850 Billion.
In 2015, OpenAI was a nonprofit with a mission to ensure AI benefits all of humanity. In 2026, it's a for-profit corporation valued at $850 billion. Tracing the root of the most dramatic pivot in tech history.
Key Takeaways
- OpenAI was founded in 2015 as a nonprofit to ensure AI safety. Sam Altman and Elon Musk co-founded it with $1 billion in pledges.
- ChatGPT (November 2022) hit 100 million users in two months, the fastest-growing consumer app ever.
- In October 2025, OpenAI converted to a Delaware public benefit corporation. The nonprofit retains a 26% stake.
- Microsoft owns roughly 27% of OpenAI (about $135 billion), making it the largest single shareholder.
- April 2026: valued at $852 billion after a $122 billion funding round led by SoftBank, Andreessen Horowitz, and others.
Root Connection
The tension between idealism and commercialization in AI traces back to the Dartmouth Conference of 1956, where the founders of AI as a field debated whether their work should be open academic research or proprietary industry secrets. Seventy years later, OpenAI has answered that question.
Timeline
1956: The Dartmouth Conference establishes AI as an academic field, with open research as its founding ethos.
2015: OpenAI founded as a nonprofit by Sam Altman, Elon Musk, and others, with $1 billion in pledges to ensure safe AGI.
2019: OpenAI creates a "capped-profit" subsidiary, allowing outside investment for the first time.
2022: ChatGPT launches in November, reaching 100 million users in two months, the fastest-growing app in history.
2023: Sam Altman fired and rehired in a chaotic five-day saga that exposed governance cracks.
2025: OpenAI completes for-profit recapitalization. The nonprofit becomes the OpenAI Foundation with a 26% equity stake.
2026: OpenAI raises $110 billion at a $730 billion valuation in February, then $122 billion at $852 billion in April. An IPO is expected.
In December 2015, a group of technology leaders announced the formation of OpenAI, a nonprofit artificial intelligence research company. The founding team included Sam Altman, then president of the prestigious startup accelerator Y Combinator, and Elon Musk, who had been publicly warning about the existential risks of AI for years. They pledged over $1 billion to the effort.
The mission statement was unambiguous: "OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."
The pitch was compelling. Google had acquired DeepMind. Facebook had hired Yann LeCun to lead its AI research lab. The world's most powerful AI was being built inside the world's largest corporations, behind closed doors, optimized for profit. OpenAI would be the counterweight — an open, nonprofit alternative that would publish its research, share its models, and ensure that artificial general intelligence, if it arrived, would belong to everyone.
That was the plan.
THE MONEY PROBLEM
The problem with building the most advanced AI in the world as a nonprofit is that building the most advanced AI in the world costs an absurd amount of money. AI research requires compute — massive quantities of GPU processing power. GPU clusters cost millions. Data centers cost billions. The talent to build frontier AI models commands salaries in the millions per year.
OpenAI's original charter said: "We commit to building safe, beneficial AGI for all of humanity." The charter didn't mention shareholders, IPOs, or $850 billion valuations. Mission statements don't survive contact with $110 billion funding rounds.
— ROOT•BYTE
OpenAI's $1 billion in pledges wasn't going to cut it.
In 2019, OpenAI created a "capped-profit" subsidiary. The structure was novel: outside investors could put money in and earn returns, but profits would be capped at 100x the original investment. The nonprofit board would retain control. The mission would remain paramount.
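The mechanics of the cap can be sketched in a few lines. This is an illustrative model only: the 100x multiplier is the publicly reported figure, but the function and its interface are invented here, and the real structure involved tiered caps and profit-sharing details that were never fully public.

```python
def capped_return(investment: float, gross_return: float,
                  cap_multiple: float = 100.0) -> float:
    """Illustrative model of the 2019 'capped-profit' structure:
    an investor's payout is limited to cap_multiple times the amount
    invested; returns above the cap would flow to the nonprofit."""
    cap = investment * cap_multiple
    return min(gross_return, cap)

# A $10M stake whose gross value grew to $5B pays out at most $1B (100x);
# under this model, the remaining $4B would revert to the nonprofit.
payout = capped_return(10_000_000, 5_000_000_000)
excess_to_nonprofit = 5_000_000_000 - payout
```

The point of the design was that the cap only binds in extreme-upside scenarios — exactly the AGI-scale outcomes the nonprofit was built to govern.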
Microsoft invested $1 billion. Then another $10 billion. Then more. By 2025, Microsoft had invested approximately $13 billion in OpenAI, making Azure OpenAI's exclusive cloud provider and weaving OpenAI's models into Microsoft's products.
The capped-profit structure was a compromise that satisfied nobody. Investors wanted more control and higher returns. Researchers worried about commercial pressure corrupting the mission. And the board — the nonprofit board that was supposed to ensure AI safety remained the priority — found itself governing a rapidly commercializing entity while lacking the expertise and authority to do so.
THE NOVEMBER CRISIS
The nonprofit that was founded to prevent AI from being monopolized by corporations has become... a corporation. With monopoly ambitions.
— ROOT•BYTE
On November 17, 2023, the OpenAI board fired Sam Altman as CEO. The official statement said he had not been "consistently candid in his communications with the board." No further explanation was given.
What followed was five days of corporate chaos that exposed every fault line in OpenAI's governance.
Microsoft, which had invested $13 billion, learned about the firing from a tweet. Over 700 of OpenAI's approximately 770 employees signed a letter threatening to resign and follow Altman to Microsoft unless the board resigned. The board, which was supposed to prioritize AI safety over commercial interests, faced a choice: hold their ground on whatever concerns led to the firing, or capitulate to commercial pressure.
They capitulated. Altman was reinstated. Most of the board members who fired him were replaced. The safety-focused board that was supposed to be the check on commercial AI development had been overridden by the commercial interests it was designed to regulate.
The lesson was clear: when a nonprofit governs a commercial entity worth billions, the commercial entity wins.
THE FOR-PROFIT CONVERSION
In October 2025, OpenAI completed what it had been moving toward for years: a full conversion to a for-profit structure.
The nonprofit was renamed the OpenAI Foundation. It retained a 26% equity stake in the new entity, OpenAI Group PBC, a Delaware public benefit corporation. Microsoft held approximately 27%. The remaining 47% went to investors and employees.
The California Attorney General approved the deal after months of scrutiny. Critics called it the largest asset transfer from a nonprofit to a for-profit entity in American history. OpenAI called it a necessary step to compete with well-funded rivals and continue pursuing its mission.
The valuation trajectory tells the story more clearly than any press release:
- March 2025: $300 billion
- October 2025: $500 billion (secondary market)
- February 2026: $730 billion ($110 billion round led by Amazon, SoftBank, and Nvidia)
- April 2026: $852 billion ($122 billion round led by SoftBank and Andreessen Horowitz)
An IPO is expected, with analysts estimating a valuation between $500 billion and $1 trillion at listing.
THE PATTERN
OpenAI's story is not unique. It follows a pattern that has repeated throughout the history of technology.
A technology is developed with public funding and open research. Idealists create an organization to keep it open and beneficial. The technology becomes commercially valuable. Commercial interests invest, attach strings, and gradually take control. The original mission is redefined to accommodate profit. The idealists leave or are pushed out. The technology becomes the private property of the very entities it was meant to be an alternative to.
It happened with the internet (ARPANET to ISP monopolies). It happened with genomics (Human Genome Project to gene patent wars). It happened with social media (connecting people to surveillance capitalism).
And it happened with OpenAI. The nonprofit that was founded to prevent AI from being monopolized by corporations has become a corporation with monopoly ambitions, funded by the same companies it was supposed to be an alternative to.
Is this inevitable? Maybe. The forces that pull open research toward commercialization — the need for capital, the talent market, the competitive pressure — are enormous. The structural defenses that nonprofits can deploy against these forces are weak.
But inevitability doesn't mean we shouldn't name what happened. OpenAI's charter said "safe, beneficial AGI for all of humanity." Its cap table says Microsoft, SoftBank, Amazon, Nvidia, and Andreessen Horowitz.
Words are cheap. Cap tables don't lie.