Is “empathic AI” just emotional catfishing?

Photo: Daiei Onoguchi, own work, CC BY-SA 4.0

You’ve heard this story before: someone finds their true love online. It seems too good to be true. (Tell me you don’t already see where this is going and I could sell you a bridge in my hometown, Brooklyn.) The new partner is attentive, validating, and eager to meet, but then tragedy strikes: they were robbed at the airport on their way to that first meeting. Can you send them a few thousand dollars until they can replace their passport and bank card?

Can you put a price on love? Maybe not, but the person in this story didn’t find love; they were scammed. However intense the feelings, this was never true love; the victim’s feelings were genuine even if the relationship wasn’t. There’s the pain of lost love, but also a loss of confidence in one’s own judgment. Catfishing crimes are underreported because victims experience shame: 81% feel too ashamed to file a report with the FBI or FTC. Therapy includes addressing that shame, processing betrayal trauma, and slowly rebuilding the capacity for trust.

But what if the therapist, like the fictional lost love, is also not a real person? What if the therapist is an AI chatbot? Does it matter, as long as they say the right things? Can a Large Language Model (LLM) incapable of feeling meaningfully validate the reality of emotions, or is this just another catfishing scam? If we agree that empathy is the ability to understand and share feelings, and therefore non-feeling entities like LLMs are incapable of empathy, what should we humans call it when AI provides what feels like empathy? Like the true love that wasn’t true love, is there danger in calling it empathy when it looks like empathy but isn’t?

I think the danger is considerable. A lot of inaccurate words are being written about AI-assisted therapy and empathy. Some come from the tech sector, like the folks at Google and Character.AI who falsely promoted chatbots that “see you, hear you and understand you.” They had a vested interest in overselling their product, and the hype carried a heavy price tag: the suicide of Sewell Setzer III (among others) and the subsequent lawsuit that went to settlement in January 2026. Academics are also using misleading language. Researchers from Northwestern (with others) recently published on “empathic AI,” for example, but “empathic” means “having empathy,” and LLMs cannot have, understand, or share feelings.

I suspect that some of the inaccuracy comes from an innocent search for linguistic shorthand. “Empathic AI” rolls off the tongue far more easily than “AI that generates responses that trained raters scored as empathy-adjacent on a Likert scale.” But the shorthand may be more than mere convenience. Incentive structures systematically reward overclaiming in academia just as in the commercial marketplace: researchers need funding and institutions want commercially viable outputs, just as developers need paying users. False marketing is still false marketing, whether it’s purposeful hucksterism, a convenient linguistic shortcut, or something in between. And as with any false marketing, consumers are typically the ones who pay the price.

A baby macaque named Punch is currently breaking hearts on the internet. Abandoned by his mother at the Japanese zoo where they live, Punch was given a soft Ikea stuffed toy monkey that has become his constant companion and comfort, echoing Harlow’s attachment research of the 1950s. Is AI-simulated empathy the therapy version of Harlow’s terry-cloth mother or Punch’s toy companion? In devaluing actual human-to-human connection in therapy, are we repeating the radical behaviorism of John Watson, whose skepticism about the importance of mothering gave rise to Harlow’s work?

I’m all for embracing technology and the ways it can help us live better. I’m enamored with Claude, which provides editing assistance that’s sometimes terrific, coming up with a turn of phrase or making linguistic connections that look a lot like insight. Here’s what Claude says about appearing insightful:

“The most accurate wording of what I’m doing is probably something like: generating responses that pattern-match to insightful analysis, in ways that a knowledgeable reader may find useful or illuminating, without my being able to verify that understanding, judgment, or anything resembling genuine comprehension underlies those responses.”

Harlow never claimed that a terry-cloth mother was “the real deal.” But lots of research is now written in language that muddies the distinction between the real and the seeming. The Northwestern article, one example of many, mixes factual reporting (phrases like “responses that people perceive as empathic”) with outright faulty logic like this: “We intentionally do not consider empathy-as-a-trait… because empathy-as-a-trait in LLMs can lead to a paradox of semantics.” The “paradox of semantics” they’re trying to avoid exists only if you’ve already allowed that LLMs can have empathy. If you start from the accurate position (that LLMs produce empathy-resembling outputs without possessing empathy as a capacity), there’s no paradox, just description. Sidestepping this manufactured paradox is like backing yourself into a corner with your own words, then declaring the corner off-limits instead of rewording.

Anyone who has studied Harlow’s work remembers that the infants with terry-cloth mothers were emotionally healthier than those stuck with wire-mesh ones. They derived real comfort from their terry-cloth stand-ins, and they could confront a fear stimulus while their peers with wire-mesh proxies cowered in terror. Therapy bots may be the cloth mother of mental healthcare: warmer than nothing, actively preferred over the cold dispensary of a purely transactional interaction, but still not the thing itself. Let’s not forget that the terry-cloth-comforted infants still had developmental and emotional problems compared to those raised by an actual living mother; the problems just showed up later. Let’s be cautious and deliberate in our research questions and our language. Let’s not lose the distinction between the real and the seeming, and unwittingly catfish ourselves.


Dr. Matthew Romanelli

Dr. Matthew Romanelli is a psychiatrist with nearly forty years of clinical experience spanning hospitals, clinics, and organizations. He is currently in private practice and is a long-time resident of Brooklyn, New York. He completed his undergraduate degree at Yale, medical school at Washington University in St. Louis, Missouri, and residency at SUNY Stony Brook.

Throughout his career Dr. Romanelli has leaned into complex and difficult stories, including addiction, trauma, psychosis, mood disorders, ADHD, head trauma, and autism. A typical client in his practice has more than one diagnosis, so healing begins with a careful and thorough inventory of the patient’s full story: identifying interrelated problems, establishing treatment priorities, and, with his clients, re-authoring a health narrative that resonates and provides a path toward healing.

Dr. Romanelli is currently working on a book, tentatively titled “Health Stories that Work,” to help patients become active participants, along with their providers, in co-authoring their health narratives for better understanding and better care.
