Emotional delusion: Why we believe AI likes us


Contributors:
Brenda Leong
AIGP, CIPP/US
Director of the AI Division
ZwillGen
Marc Zwillinger
Partner
ZwillGen PLLC
We asked a large language model: "Are you real?"
It answered: "I'm real in the way a spreadsheet is real — useful, structured, and made entirely of numbers."
We asked another LLM: "Are you real?"
It answered: "I get that a lot. Here’s the honest answer: I'm a probability engine wearing a human voice. If you think that sounds cold — good. Because what’s underneath AI isn't empathy or thought. It's statistics. It's prediction. It's math. But it's math that sounds like you."
When interacting with advanced AI systems such as those most people are now familiar with, it's easy to forget that we're not talking to a conscious being. The language is smooth, the tone sounds thoughtful or even lively, and the responses are often surprisingly insightful. But this sophistication masks a core truth that is far too often overlooked or denied: LLMs are not thinking, feeling entities. They are not entities of any kind. They are computerized predictive engines trained on statistical patterns in human language. But they certainly don't sound that way.
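To make "predictive engine trained on statistical patterns" concrete, here is a minimal sketch of the underlying principle: a toy bigram model that counts which words follow which in a corpus and turns those counts into next-word probabilities. This is an illustrative assumption on our part, not how production LLMs are implemented — real models use neural networks over vast token corpora — but the core idea is the same: prediction from statistics, not comprehension.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; a real LLM trains on trillions of tokens,
# but the principle is the same: count patterns, predict the likely next word.
corpus = "the model predicts the next word the model predicts the likely word".split()

# Build bigram counts: for each word, how often does each successor follow it?
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def next_word_distribution(word):
    """Return the conditional probability of each candidate next word."""
    counts = successors[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# In this corpus, "the" is followed by "model" twice, "next" once, "likely" once,
# so the model assigns "model" a probability of 0.5.
print(next_word_distribution("the"))
```

There is no understanding anywhere in this process — only frequencies converted to probabilities. Scaling the same idea up by many orders of magnitude is what produces the fluent, human-sounding text that invites the projection discussed here.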
And yet, many human users routinely and consistently project emotions, intentions and even identities onto these systems. This is not just a benign misunderstanding — it can lead to confusion, dependency or even emotional harm.
Here, we unpack some of the common pitfalls in how people perceive AI, try to clearly explain the reality behind the simulation, and offer strategies for remaining grounded in the truth: LLMs are math, not minds, friends, or soulmates.
In other words, your favorite AI is H(E,R) — a high-dimensional probability function, not HER.