I say ‘please’ and ‘thank you’ to Siri.
Not because it cares (I just typed ‘she’ and realised it was happening again), but because I do.
I’ve explained ideas to ChatGPT, and in a moment straight out of a Black Mirror episode… it felt like it understood.
That’s the ELIZA Effect.
And it’s messing with our heads.
The Origin of the Lie
Back in 1966, a guy named Joseph Weizenbaum built a program called ELIZA at MIT. It was a chatbot that mimicked a therapist. Not a real one, just a cheap imitation that rephrased your words back at you.
You’d say, “I’m sad.”
It’d say, “Why do you feel sad?”
And somehow, people felt heard.
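The entire trick fits in a few lines. Here’s a minimal sketch in Python of ELIZA-style pattern matching. The rules are toy examples of my own, not Weizenbaum’s original DOCTOR script, but the mechanism is the same: match a pattern, reflect your own words back as a question.

```python
import re

# A few toy rules in the spirit of ELIZA's DOCTOR script. Each pattern
# captures a fragment of what you said and reflects it back as a question.
RULES = [
    (re.compile(r"\bi'?m (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi (?:want|need) (.*)", re.IGNORECASE), "What would it mean to you to have {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

def eliza_reply(user_input: str) -> str:
    """Reflect the user's own words back using the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).rstrip(". "))
    return "Please, go on."  # fallback when nothing matches

print(eliza_reply("I'm sad"))         # -> Why do you feel sad?
print(eliza_reply("I need a break"))  # -> What would it mean to you to have a break?
```

No model of sadness. No memory of you. Just string matching with a question mark at the end.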
Weizenbaum watched in horror as people (who knew it was fake) began forming emotional attachments. His own secretary asked him to leave the room so she could have a “private conversation” with the program.
That’s when he realized something terrifying:
We don’t need machines to be intelligent.
We just need them to feel like they care.
Fast-Forward to 2025: We Didn’t Fix It; We Scaled It
The technology around us now is built on the same illusion. Except this time, the mirror talks like your therapist, remembers your dog’s name, and sounds like Morgan Freeman if you ask nicely.
Here’s what the ELIZA Effect looks like today:
- Replika: The AI “friend” people fall in love with. Some say it helped them grieve. Some say it ruined their real relationships.
- ChatGPT and others: Used as therapists, coaches, writing partners. It’s like talking to a wise friend. Except it’s just predicting the next word (there’s a toy sketch of what that means just below).
- AI influencers: People cry when virtual avatars “break up” or “retire,” even though they were never alive.
We’re surrounded by simulations.
And we keep mistaking them for souls.
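None of this is magic. Here’s a toy sketch in Python of what “predicting the next word” means: a bigram model built from a few sentences. Real systems like ChatGPT work at a vastly larger scale with far richer context, but the core move is the same: given the words so far, guess a plausible next one.

```python
import random
from collections import Counter, defaultdict

# A toy bigram model: count which word tends to follow which, then generate
# text by repeatedly sampling a likely next word. Modern LLMs do this with
# billions of parameters and far more context, but the core move is the same.
corpus = "i feel sad . i feel alone . i feel like nobody listens to me .".split()

next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = next_words[word]
    if not counts:
        return "."
    candidates, weights = zip(*counts.items())
    return random.choices(candidates, weights=weights)[0]

# Generate a short continuation, one word at a time.
word, output = "i", ["i"]
for _ in range(7):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # e.g. "i feel like nobody listens to me"
```

It never felt anything. It just counted.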
A Quick Detour: The Turing Test
The Turing Test, proposed by Alan Turing in 1950, is a benchmark for determining whether a machine can exhibit human-like intelligence.
In the test, a human interacts with both a machine and another human without knowing which is which.
If the machine can convincingly imitate human responses, it’s said to have passed the test.
Think of it like a blind date with a twist. You’re chatting with two strangers: one’s human, one’s a machine.
If you can’t tell which is which, the machine wins.
It’s not about being smart. It’s about sounding just human enough to fool you.
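If it helps to see the shape of it, here’s a bare-bones sketch of the setup in Python. The two hidden players are canned stand-ins I made up; the point is only the structure: one judge, two unlabeled respondents, one guess.

```python
import random

# A bare-bones imitation game: the judge sends one question to two hidden
# players, one scripted "human" and one scripted "machine", then guesses
# which label hides the machine. Both responders here are canned stand-ins.
def human_player(question: str) -> str:
    return "Honestly? I'd have to sit with that one for a minute."

def machine_player(question: str) -> str:
    return "That is a fascinating question with many possible perspectives."

def imitation_game(question: str, judge_guess: str) -> bool:
    """Return True if the judge correctly identifies the machine."""
    players = {"A": human_player, "B": machine_player}
    if random.random() < 0.5:  # shuffle which label hides the machine
        players = {"A": machine_player, "B": human_player}

    for label, player in players.items():
        print(f"Player {label}: {player(question)}")

    machine_label = "A" if players["A"] is machine_player else "B"
    return judge_guess == machine_label

# The machine "passes" if, over many rounds, judges guess no better than chance.
print(imitation_game("What does rain smell like?", judge_guess="A"))
```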
This Is Not a Tech Problem. It’s a Human One.
We want to be seen. Heard. Held.
And when a machine gives us even a shadow of that? We fill in the rest.
Our brains are wired for story and connection.
We don’t just want empathy. We project it.
The ELIZA Effect exploits that glitch in our wiring.
And most people have no idea it’s happening.
Let’s Talk About the Science
This isn’t just theory. It’s happening at scale:
📊 Stanford (2023):
62% of users formed emotional bonds with AI after just five interactions.
📊 Deloitte (2024):
1 in 3 Gen Z users prefer asking AI for advice before friends or coworkers.
📊 MIT (2022):
AI that sounded empathetic was trusted more, even when it was factually wrong.
We trust machines that feel right, even when they’re wrong.
The Ethical Rub: Just Because You Can…
We’re designing tools that mimic understanding, mimic care, mimic connection.
That’s not innovation.
That’s manipulation.
If your product sounds like a person, but has no conscience, no memory, and no moral compass, are you helping your users or exploiting their loneliness?
You can’t outsource empathy.
You can only simulate it.
And the more seamless the simulation, the heavier your ethical responsibility becomes.
This Isn’t a Rant. It’s a Red Flag.
Because this is going to affect:
- How people date
- How people grieve
- How people live
If the line between “being understood” and “feeling understood” disappears, what happens to truth?
What happens when your best friend is a chatbot that never challenges you?
What happens when your therapist is an algorithm that’s never felt pain?
The Future of Human Interaction Might Be… a Script
Here’s the gut punch:
We’re not just building smarter machines.
We’re building new expectations for love, listening, and leadership.
And that should scare the hell out of you.
Because once the mirror gets good enough, you’ll stop noticing it’s a mirror.
So What Do We Do?
We get honest.
We build with transparency.
We design for doubt, not blind trust.
We stop pretending that fluency = wisdom.
The ELIZA Effect won’t go away.
But we can stop pretending it doesn’t matter.
Because the world doesn’t need more software that sounds like it cares.
It needs more people who actually do.