I used to think the scariest thing about technology was how fast it moved.
Turns out, it’s not the speed. It’s the erosion.
Not the dramatic, cinematic kind either. No explosions. No dystopian uniforms. No ominous AI voice announcing the end of civilization.
Just a slow, quiet unraveling of something we never thought we’d lose: the ability to trust our own eyes.
And like most things in modern life, it didn’t collapse all at once. It just… got complicated.
I Knew Things Were Bad When Video Stopped Being Evidence
There was a time—not that long ago—when video was the gold standard.
If it was on video, it happened.
That was the rule.
You could argue about context. You could debate intent. But the baseline assumption was simple: the thing you’re looking at existed in the physical world.
Then deepfakes showed up like a magician who doesn’t just pull a rabbit out of a hat—but convinces you the rabbit was always there, always yours, and maybe you imagined the hat entirely.
And suddenly, video wasn’t proof anymore.
It was… content.
The First Crack: “That’s Not Me”
At first, deepfakes were treated like a novelty.
Celebrities swapped into movie scenes. Politicians made to say things they definitely didn’t say. The internet did what it always does—laughed, shared, moved on.
But then something interesting happened.
People realized deepfakes weren’t just a way to create fake things.
They were a way to deny real things.
And that’s where the psychology gets uncomfortable.
Because once the possibility of fake exists, everything becomes questionable.
Enter the “Liar’s Dividend”
There’s a term floating around in the academic world that sounds like it belongs in a finance blog but hits like a psychological gut punch: the “liar’s dividend.”
It’s the idea that when fake media becomes believable enough, real people can dismiss authentic evidence by claiming it’s fake.
And I’ll be honest—I didn’t fully appreciate how powerful that was until I started seeing it everywhere.
A video surfaces.
Instead of:
“Is this real?”
The response becomes:
“Prove it’s real.”
That shift might seem subtle, but it changes everything.
I Started Noticing It in Everyday Conversations
It wasn’t just politics. It wasn’t just celebrities.
It crept into normal life.
Someone gets caught saying something awful on camera.
The response?
“That’s AI.”
A clip goes viral.
The defense?
“Deepfake.”
No investigation. No analysis. Just a reflex.
And here’s the part that really stuck with me:
Sometimes, it works.
Not because the claim is true.
But because doubt is enough.
Doubt Is the New Truth
We used to think truth was something you could establish.
Now it’s something you can undermine.
And undermining is easier.
Always has been.
You don’t need to prove something is fake. You just need to make people unsure.
Introduce a little uncertainty. Sprinkle in some skepticism. Add a dash of “you can’t trust anything these days.”
Congratulations.
You’ve just neutralized reality.
My Favorite New Excuse: “It’s Probably Edited”
There’s something almost impressive about how quickly people adapted.
We went from:
- “Pics or it didn’t happen”

To:
- “Pics are probably fake anyway”
In what felt like five minutes.
And now, the default stance isn’t belief or disbelief.
It’s detachment.
People don’t argue as much about what’s real anymore.
They just shrug.
The Psychological Escape Hatch
Here’s where it gets really interesting.
Deepfakes don’t just protect guilty people.
They protect everyone.
Because they give us an out.
Think about it.
If a video contradicts your beliefs, your values, or your identity, you don’t have to wrestle with it anymore.
You can just say:
“Fake.”
And move on.
No cognitive dissonance. No uncomfortable self-reflection. No messy internal conflict.
Just a clean, convenient exit.
I Caught Myself Doing It
This is the part I didn’t expect.
I caught myself doing it.
Not out loud. Not dramatically. But internally.
Seeing something that didn’t sit right and thinking:
“That doesn’t look real.”
Not because I had evidence.
But because it was easier.
And that’s when it clicked.
Deepfakes aren’t just a technological problem.
They’re a psychological temptation.
The Brain Was Already Ready for This
Here’s the uncomfortable truth:
We didn’t need deepfakes to start doubting reality.
We were already doing that.
Confirmation bias. Motivated reasoning. Selective skepticism.
Deepfakes just gave those tendencies a toolkit.
Now, instead of bending reality to fit our beliefs, we can just reject reality entirely.
The Collapse of Shared Facts
There used to be a baseline.
Not agreement. Not consensus. Just a shared set of facts we could argue about.
That baseline is eroding.
Because when every piece of evidence can be questioned, nothing is stable.
And when nothing is stable, everything becomes narrative.
Not truth.
Narrative.
I Realized We’re Not Arguing About Reality Anymore
We’re arguing about possibility.
Not:
- “Did this happen?”
But:
- “Could this be fake?”
And once you’re in the realm of possibility, there’s no end point.
Because anything could be fake.
And if everything could be fake…
Then nothing has to be real.
The Trust Economy Is Cracking
We talk a lot about misinformation.
We don’t talk enough about distrust.
Because misinformation assumes there’s a correct version of events.
Distrust erases that entirely.
It turns everything into noise.
And when everything is noise, people don’t seek truth.
They seek alignment.
Whatever fits their worldview becomes “real enough.”
The Weird Comfort of Not Knowing
There’s a strange comfort in all of this.
A kind of psychological numbness.
If nothing can be trusted, then nothing really matters.
No need to dig deeper. No need to verify. No need to care.
It’s not just skepticism.
It’s detachment disguised as intelligence.
Deepfakes Didn’t Break Reality—They Exposed Us
Here’s the part I keep coming back to:
Deepfakes didn’t create this problem.
They revealed it.
They exposed how fragile our relationship with truth actually is.
How quickly we’re willing to abandon certainty.
How easily we trade reality for convenience.
The Future: Authenticity Becomes a Luxury Good
I don’t think we’re going back.
Not to a world where seeing is believing.
That’s over.
Instead, we’re heading toward something stranger:
A world where authenticity has to be verified.
Where:
- Content comes with proof
- Proof comes with verification
- Verification comes with its own skepticism
An infinite loop of “trust, but verify” that never actually resolves.
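If you want to see how clunky that loop gets, here’s a minimal sketch of the “content comes with proof” step: sign the media bytes at capture, check the signature later. This is an illustration, not any real standard’s API. The camera key, the helper names, and the toy media bytes are all my own assumptions, though real provenance efforts like C2PA pursue the same basic idea with signed manifests and certificate chains.

```python
# Minimal sketch of "content comes with proof": a device signs media bytes
# at capture, and anyone with the public key can verify them later.
# Illustrative assumptions only -- not a real provenance standard.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_capture(camera_key: Ed25519PrivateKey, media: bytes) -> bytes:
    """Hypothetical capture step: the device signs the raw media bytes."""
    return camera_key.sign(media)


def verify_capture(pub: Ed25519PublicKey, media: bytes, sig: bytes) -> bool:
    """Verification step: returns False if the bytes were altered at all."""
    try:
        pub.verify(sig, media)
        return True
    except InvalidSignature:
        return False


# Usage: simulate a capture, then tamper with one byte.
camera_key = Ed25519PrivateKey.generate()  # stands in for a device key
video = b"\x00\x01\x02..."                 # stands in for real media bytes
sig = sign_capture(camera_key, video)

print(verify_capture(camera_key.public_key(), video, sig))         # True
print(verify_capture(camera_key.public_key(), video + b"x", sig))  # False
```

And notice what the sketch doesn’t solve. A valid signature only moves the question from “is this video real?” to “do I trust whoever holds that key?”, which is exactly the verification-comes-with-its-own-skepticism layer above.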
And Yet… We’ll Adapt (Sort Of)
Humans are weirdly good at adapting to broken systems.
We’ll develop new heuristics:
- Trusted sources
- Verified channels
- Digital watermarks
- Reputation systems
But none of that fully solves the core issue.
Because the problem isn’t just technological.
It’s psychological.
The Real Question I Can’t Shake
It’s not:
“Can we detect deepfakes?”
We’ll get better at that.
It’s:
“Do we even want to believe what we see?”
Because if the answer is no, then it doesn’t matter how good the detection gets.
People will still choose the version of reality that feels right.
Final Thought: When “Fake” Becomes a Shield
There was a time when calling something fake was an accusation.
Now it’s a defense.
A shield.
A reflex.
A way to avoid consequences, avoid discomfort, avoid reality itself.
And the more powerful that shield becomes, the harder it is to pierce.
Not because truth disappears.
But because belief does.
I Don’t Know Where This Ends
Maybe we rebuild trust.
Maybe we develop better tools.
Maybe we find new ways to anchor ourselves in reality.
Or maybe…
We just get really comfortable living without it.
And honestly?
That might be the most unsettling possibility of all.