Deepfakes Went From a Reddit Hobby in 2017 to a State Weapon in 2026
A Reddit user coined the term 'deepfake' in late 2017 while using AI to swap faces in videos. By 2026, the technology is being used in state-level information warfare and social engineering attacks that are 'cheap, fast, and nearly undetectable.'
Key Takeaways
- The term 'deepfake' was coined by a Reddit user in September 2017 using open-source face-swapping AI
- The technology is built on variational autoencoders (2013) and GANs (2014), both published as academic research
- The first major corporate deepfake fraud stole $35 million in 2020 using a synthetic voice of a CEO
- CrowdStrike's 2026 report ranks deepfake-driven attacks as a top social engineering threat: 'nearly undetectable'
Root Connection
Deepfakes descend from the variational autoencoder (2013) and generative adversarial network (2014) — research tools that were never designed for face-swapping but proved devastatingly effective at it.
Timeline
2013: Variational autoencoders published, the foundational architecture for generative AI
2014: Ian Goodfellow invents GANs, two neural networks competing to generate realistic images
2017: Reddit user 'deepfakes' coins the term, using open-source AI for face-swapping videos
2020: First deepfake used in corporate fraud, with $35M stolen via a synthetic voice of a CEO
2025: CrowdStrike identifies deepfake-driven attacks as a top social engineering vector
2026: State-level deepfake campaigns detected during Iran's 240-hour internet blackout
In September 2017, a Reddit user with the handle 'deepfakes' began posting videos in which celebrities' faces had been swapped onto other bodies using artificial intelligence. The technique used open-source machine learning tools — specifically autoencoders and generative adversarial networks — that had been published as academic research. The user just applied them to faces.
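The face-swap pipeline behind those early videos is commonly described as a pair of autoencoders sharing one encoder: the shared encoder learns identity-agnostic features such as expression and pose, while each person gets a dedicated decoder that renders those features as that person's face. The swap itself is simply encoding a frame of person A and decoding it with person B's decoder. Here is a minimal sketch of that data flow, with untrained random weights and illustrative sizes (none of this is code from the original tool):

```python
import numpy as np

rng = np.random.default_rng(0)

DIM, LATENT = 64, 8  # flattened "image" size and latent code size (illustrative)

# One encoder shared by both identities, plus one decoder per identity.
W_enc = rng.normal(size=(DIM, LATENT)) * 0.1
W_dec_a = rng.normal(size=(LATENT, DIM)) * 0.1  # would reconstruct person A
W_dec_b = rng.normal(size=(LATENT, DIM)) * 0.1  # would reconstruct person B

def encode(x):
    # Compress a face into identity-agnostic features (expression, pose).
    return np.tanh(x @ W_enc)

def decode(z, W_dec):
    # Render those features back out as one specific identity.
    return z @ W_dec

face_a = rng.normal(size=DIM)  # stand-in for a frame of person A

# The swap: encode person A's frame, then decode with person B's decoder.
swapped = decode(encode(face_a), W_dec_b)
print(swapped.shape)  # (64,): A's expression rendered through B's decoder
```

In the real tool the encoder and decoders are convolutional networks trained for many hours on thousands of frames of each face, but the cross-decode step shown here is the core of the trick.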
The videos were crude by today's standards. Faces flickered at the edges. Lighting didn't quite match. Expressions were sometimes frozen. But they were recognizable enough to be disturbing, and the technology was improving weekly.
Within months, the term 'deepfake' entered the mainstream vocabulary. Reddit banned the original community. Platforms scrambled to develop detection tools. Researchers published papers on identifying synthetic media. Governments held hearings.
None of it mattered. The technology was open-source. Anyone could use it. And the quality kept improving faster than detection could keep up.
What started as one person's Reddit hobby in 2017 is now a tool of statecraft. The speed of this transition — from novelty to weapon in under a decade — is historically unprecedented.
By 2020, deepfakes had moved from entertainment to fraud. In a case that shocked the security industry, criminals used AI-generated audio — a synthetic voice that mimicked a CEO — to authorize a $35 million wire transfer. The employee who received the call said the voice was indistinguishable from the real person.
By 2025, CrowdStrike's annual threat report identified deepfake-driven attacks as a top social engineering vector. Synthetic video calls. Fake voice messages from executives. AI-generated photos for fake employee profiles. The attacks were 'cheap, fast, scalable, and extremely difficult for employees to detect.'
In 2026, during Iran's 240-hour internet blackout — the longest in the country's history, cutting off 92 million citizens — deepfake propaganda filled the information vacuum. With no access to real news, AI-generated videos and images spread through offline channels and limited connectivity.
The trajectory from Reddit hobby to state weapon took less than nine years. Most technologies take decades to move from civilian novelty to military application. Radio was invented in the 1890s and wasn't weaponized for propaganda until the 1930s. Nuclear physics was theoretical in the early 1930s and weaponized by 1945, an arc of roughly 15 years that required billions of dollars and the resources of nation-states.
Deepfakes required no government funding, no Manhattan Project, no classified research. The foundational technologies — variational autoencoders (2013) and generative adversarial networks (2014, invented by Ian Goodfellow) — were published openly as academic papers. The tools were free. The compute was cheap. The barrier to entry was a laptop and an internet connection.
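The adversarial idea behind GANs, two networks competing until forgeries become convincing, can be shown at toy scale. Below is a hedged sketch: a linear "generator" learns to mimic samples drawn from a normal distribution centered at 4, while a logistic "discriminator" tries to tell real samples from forgeries. Every number here (target distribution, learning rate, step count) is an illustrative choice, not anything from the 2014 paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = a*z + c tries to mimic "real" data drawn from N(4, 1).
a, c = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + b) scores samples as real vs. fake.
w, b = 0.1, 0.0
lr = 0.05

for step in range(2000):
    z = rng.normal(size=64)             # noise batch fed to the generator
    x_real = 4.0 + rng.normal(size=64)  # batch of real samples
    x_fake = a * z + c                  # the generator's forgeries

    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)

    # Discriminator step: raise log D(real) + log(1 - D(fake)).
    w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: raise log D(fake) so forgeries score as real.
    d_fake = sigmoid(w * (a * z + c) + b)
    grad = (1 - d_fake) * w             # gradient of log D w.r.t. x_fake
    a += lr * np.mean(grad * z)
    c += lr * np.mean(grad)

fakes = a * rng.normal(size=10_000) + c
print(round(float(fakes.mean()), 2))    # the fakes' mean has drifted toward 4
```

The same competition, scaled up to deep convolutional networks and image data, is what pushed synthetic faces from flickering curiosities to photorealism: every improvement in the discriminator forces the generator to produce better forgeries.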
The security implications extend beyond fraud and propaganda. Deepfakes undermine the evidentiary value of video and audio. When any recording can be convincingly faked, no recording can be fully trusted. This is the 'liar's dividend' — the ability to dismiss real evidence as fake because fake evidence exists.
We are entering an era where seeing is no longer believing. The human eye, which evolved over millions of years to assess facial expressions and detect deception, has become an unreliable security tool. The deepfake didn't defeat our technology. It defeated our biology.
The root of the deepfake isn't Reddit or GANs. It's a fundamental asymmetry in AI capabilities: generating convincing fake content is computationally cheap, while detecting it is computationally expensive and perpetually behind. The forger always has the advantage.