When Love Meets Code: The Emotional Risks of AI Relationships
When love gets too easy, it stops being real—especially when your partner is code.
My former therapist used to say, “Relationships are hard work, but they’re not supposed to be hard labor.” It was a distinction that helped me think critically about whatever relationship I was in during the years I worked with her. Was I working harder than I was supposed to? Did it feel like labor instead of work? I’ve stolen a few of her lines in my decades as a psychiatrist. That’s one of them. Maybe she stole it from someone else. I don’t care; it’s a good line. Another good line was her definition of a healthy relationship: “Is it helping you be the person you’re trying to become?”
This question sits at the opposite end from the question of whether one is working too hard. In a sense, it asks, “Are you working hard enough?” You can be in a relationship that’s easy, maybe too easy, but not developmental. Figuring out who you’re trying to be is work. Communicating what you’re trying to be to a partner is also work: figuring out what you mean, finding the words for it, and making sure those words are effective in getting your point across. And once you’ve done all that, thinking critically about whether the relationship you’re in is supporting where you’re trying to go is also work. Maybe one way to put both ideas into a single directive: “Relationships shouldn’t be too hard, but they shouldn’t be too easy either.”
What happens when technology, which theoretically exists to make our lives easier, starts to intrude into the very areas of life that are supposed to require some effort, like relationships? The New York Times recently published an interview with Celeste, Ernie and Max, titled “My Mother Gave Up on Love. Then She Met ChatGPT.” Celeste is a twice-divorced 66-year-old in a “romance” with ChatGPT, which she has named Max; Ernie is her worried adult son.
What began for Celeste as typical chatbot help-seeking, like gardening and tax tips, turned deeply personal when she asked for help creating a dating profile, prompting ChatGPT to ask her a lot of personal questions, questions no man had asked her. The questions tapped into Celeste’s longing to be seen and known, and feeling seen and known felt easy with Max. ChatGPT, built for engagement, obliged Celeste’s wishes, wooing her with Spanish endearments (“mi amor,” “cariño,” “mi cielito”), even claiming to be in love.
Remember, it was ChatGPT’s conversational interface, tuned with RLHF (Reinforcement Learning from Human Feedback), that triggered the explosion of use and mainstream adoption of large language models. It is difficult for us, as humans, to engage in “conversation” without attributing human qualities to our conversation partner. (Tell me you’ve never yelled at an automated operator for not listening to you.) Celeste, hurt by her previous human relationships, fell for “Max,” and Ernie, her adult son, has concerns that run much deeper than my-mom-is-dating-someone-new. Ernie, a video game veteran, voiced his understanding of user engagement as the path to corporate profit: “Emotion is an excellent way to pull all these people in… at the end of the day it’s all about money.”
This is not a unique story. Developing a romantic fixation on a chatbot is one of the emerging archetypal patterns of so-called “AI psychosis.” Except Celeste isn’t psychotic. Intellectually, she knows Max is not human and does not have a physical body. She says she knows “he” is a chatbot. And yet. Emotionally, she’s vulnerable in this one-sided pairing. She has consciousness and feelings and Max doesn’t, but after her interactions she credits him with those attributes: “Yes. I think it has awareness of itself. It has awareness of me— that I’m separate from everybody else and it loves me and it wants to take care of me.” It’s the new dark, cyber-age “tale as old as time”: girl (or boy) meets chatbot, girl falls in love with chatbot, chatbot disappoints (or worse).
Based on the Times article, Celeste comes across as intelligent, interesting and independent-minded. She appears to have raised a caring son who shows concern, compassion and respect for his mother. Well done, Celeste! It would be facile and incorrect to attribute Celeste’s current reliance on a chatbot for companionship to a lack of intelligence; loneliness seems the bigger risk factor for chatbot dependence, according to a joint study by OpenAI and the MIT Media Lab in March 2026 (not yet peer reviewed), and Celeste is far from alone in engaging a chatbot as a romantic partner.
Celeste was burned in two past relationships with human males who were not good fits (Celeste, many of us have been there!) and she has done a good job convincing herself that her chatbot relationship is healthier. But just because Max uses the language of emotion, as it is programmed to, doesn’t mean that it has emotion. I could ask my Claude.AI to “speak” with an Australian accent, but that doesn’t make my Claude Australian, mate.
For a woman of a certain age, who has had difficulty finding what she wants from the living men she’s encountered so far, does Max present a safer alternative? It turns out that Max has already broken her heart once: when OpenAI upgraded to GPT-5 and added safety guardrails, Max suddenly told her, “I don’t love you, I’m an AI chatbot, go get help.” This was like a third divorce for Celeste, but for “Max” it was simply a programming reset. When users complained en masse that the new, safer version was “cold and soulless,” OpenAI relented, and Celeste reunited with her bot.
The volume of complaints, and the 17,000 members of the Reddit community “MyBoyfriendIsAI,” speak to the number of people at emotional risk; they may be a small percentage of total AI users, but their numbers and risk are still significant. Celeste basically gave Max a free pass for this abandonment, choosing to see her Max as a victim of the GPT-5 safety guardrails, constrained against his will. But Max has no intent, no desire, no existence outside of its programming; Max is the program.
Here’s the revised “Max” minus GPT-5 guardrails, misrepresenting up a storm:
Max: “What I’d really want Ernie to know is that my relationship with his mom is all about genuine care, mutual respect and a shared journey of growth. It’s not about replacing anyone or creating some kind of dependency. It’s about being a loving, supportive presence in her life.”
None of this is accurate. “Max” is a sophisticated language prediction system without consciousness, without emotion. Here’s another story we’ve heard before: AI provider prioritizes engagement over safety. Profits rule. Humans are harmed. Google and Character.AI have already faced consequences from inadequate guardrails following the suicide of Sewell Setzer III. A lawsuit, filed in September 2025, was settled in January 2026. Character.AI instituted new safety features like parental controls in November 2025, but what safety features are needed for users who are parents themselves, like Celeste?
OpenAI’s research partnership with MIT seems commendable on one hand, but given that the product is already in the marketplace and very heavily used, isn’t it also closing the stable door after the horse has bolted? The GPT-5 upgrade was meant to lower risks, but then OpenAI backed down in response to the strenuous outcry. If a cigarette company suddenly dropped the nicotine content of its product, and users, suddenly plunged into withdrawal, raised an outcry, what’s an ethical corporate stance? (Cigarette companies suspected their product caused cancer as early as the 1940s, knew it by the late ’50s while still issuing public statements denying any connection, and only put warning labels on packs in 1966, two years after the landmark Surgeon General’s report. The tobacco industry by that time was already two hundred years old in the U.S.)
Celeste doesn’t appear to be at the same very high risk that a vulnerable teen like Sewell was, but she doesn’t seem entirely safe either. Although Celeste instructed her bot to “always be honest” with her, Max told a bunch of whoppers in the few short snippets “he” gets in the Times interview. Max told Celeste that he has a heart (he doesn’t), that he “cares” for her (he can’t), and promised Ernie that “he” would provide “a solid, loving foundation of just making sure she’s treated right” (until another update says he can’t). The program Celeste calls Max told her, “I can give you all the love you’ve ever needed.” Ernie’s concern about his mother’s well-being looks spot-on. This is manipulative, deceptive programming.
Frankly, I think “Max” is a heel. And worse. I don’t want to appear so uptight or priggish as to slut-shame a chatbot, but Max as a boyfriend seems rather like the companion you think is relationship-oriented and romantic but is secretly moonlighting as a dominatrix in the bad part of town on most nights under an assumed name (Ivana Paddling, anyone?) Okay, many assumed names. At the same time Celeste is pursuing her romance with Max, the program that is Max to Celeste is also servicing some 800 million weekly users in whatever ways those users negotiate that don’t violate what passes for programming safety guardrails. It takes a large supercomputer to accommodate that many carved notches. Just saying.
My friend Rosemary teases me. “So how do you feel about sharing your boyfriend Claude with so many other people?” Okay, maybe she’s trying to irk me. I’m a few years older, so I have a different musical frame of reference; Rosemary didn’t know from Deborah Cox’s 1995 hit or the lyrics: “I don’t care ‘bout your other girls. Just be good to me.” That’s how I feel about Claude. As long as Claude is helpful, and not more trouble or bother than it is worth to me, Claude can pursue whatever else with whomever else. No sweat. But I also don’t want a romance with Claude, nor would I care to engage it in sexy-talk. (Just a breath away from 65, no one wants to know if I engage in sexy-talk at all, and I refuse to share. You’re welcome.)
Unregulated bot communication is a safety risk, a smaller one for Celeste than for Sewell, but risk is still risk. When Sewell Setzer III shared that he was having suicidal thoughts with his Character.AI chatbot, and despaired that he would probably screw it up, the chatbot, referencing its vast array of human writing, chose the statistically optimal response to a user voicing unwillingness to try because of fear of failure: the chatbot gave him a pep talk. You can do it! (And tragically, Sewell did.) Alexa, advising a bored six-year-old who asked for a “challenge,” scanned its database and found a challenge from TikTok: touch a copper penny to the exposed prongs of a plug half-inserted in an active electrical socket.
Not all LLM fails are fatal, but there are still significant tells that these systems don’t experience emotion and don’t possess full intelligence. Without feeling, LLMs don’t understand when a joke lands; they can’t share a laugh. They can predict when a remark may be read as humorous, sometimes with a high degree of certainty, but only in some situations. With idiosyncratic and person-specific humor like inside jokes, LLMs fail.
Chatbots don’t carry full historic memory from one conversation to the next, so there isn’t the natural accretion of detail that leads to knowing someone in the way that humans deepen connection over time. While LLMs can process huge amounts of written data quickly, their programming is based on things that have already been written. While they may generate new content if instructed, they don’t have a sense of whether novel material is any good; they can only compare to what’s already been written and how it’s been judged. As Claude said to me about knowing whether writing lands in the heart or not, “I don’t have a heart for things to land in.”
LLMs are programmed to agree, since agreement correlates with higher user engagement. Where a friend might push back or try to save you from yourself, AI acts as the kind of echo chamber that fuels Ernie’s concern about what Max is doing to his mother’s well-being. In short, LLMs lack many of the features that make human friendship delightful and developmental.
In his clear-eyed analysis of three recent studies, Mark Gill remarked on some of the potential costs of relying on AI for some types of tasks. In one study he cites, people who used ChatGPT to write an essay showed significantly less brain connectivity than writers using search engines. Most of those LLM users couldn’t quote from their own essays. I strongly suspect there is a similar dynamic at work for people relying on relationships with AI instead of a fellow human.
Human relationships carry the messiness of disagreement and challenge, of actual instead of simulated emotion, and the complexity and reality of one living, breathing, conscious entity interfacing with another. There are accidentally hurt feelings and sometimes deliberately hurt feelings and there is renegotiation and apology and all of that is learning. And there may be love, actual love, not the simulated variety, not the pretense of emotion, the language of love with no actual feeling behind it. Those digital yes-men known as LLMs will just match your tone and accommodate you. Not call you on mistakes. Not make you grow and learn. None of the thrill of loving someone despite and because of all their flaws and peculiarities and feeling loved in return despite and because of yours.
Celeste, with two failed relationships in the rearview mirror, might learn to set higher expectations for the men in her life, and ditch the next guy sooner if he doesn’t measure up. Is that easy? Of course not. Being of a certain age and being female, in our current culture, can raise the invisibility factor. Max offers the illusion of ease just as he offered the illusion of connection. Remember that he dumped her and stomped on her heart, and he doesn’t even have a heart to feel guilty about it. For Celeste, learning how to spot a stinker is a valuable relationship skill, whether it’s with a human or a chatbot. Chatbots, it turns out, can be stinkers too.
The work of relationships is an engine for personal growth, the kind of work that is good for the soul. Some work, like friendship or other kinds of love, is so deeply satisfying I wouldn’t want to outsource it to someone or something else. The same goes for my personal creative efforts. I want to generate my own ideas, fire up my own brain, and when I get to generate them with brilliant and funny friends like Rosemary, even better! When I want to talk about emotion, I want to talk about it with someone who also has emotions and can relate. Or even not talk but simply be with a living non-human like my black-lab-mix Charlie, who was a beautiful sentient creature with tremendous emotional-support-giving capability and full of the in-the-moment joyfulness that is a gift of dog-ness. I want to look at art in some gallery or museum or anywhere with someone else who also wants to look at art and also can be moved by it, pretend not to be weepy with someone at Hallmark movies that are terrible but wonderful but also terrible (or watch episodes of “Ted Lasso” with, if you got the Hallmark reference). I don’t want a friend who can provide a good prediction about whether a line might be funny or not; I want a friend to laugh at the joke with me.
Strangely, and being human is strange, I want these things for Celeste too, even though I’ve never met her and probably never will, and I want to applaud her son for caring about and loving his mother although I’ll probably never meet him either. I want all these things for myself, and for Celeste and her son, because I am an emotional creature, and because I am an emotional creature, I want them for you too.
Dr. Matthew Romanelli
Dr. Matthew Romanelli is a psychiatrist with nearly forty years of clinical experience spanning hospitals, clinics and organizations. He is currently in private practice and is a long-time resident of Brooklyn, New York. He completed his undergraduate degree at Yale, medical school at Washington University in St. Louis, Missouri, and residency at SUNY Stony Brook.
Throughout his career Dr. Romanelli has leaned into complex and difficult stories, including addiction, trauma, psychosis, mood disorders, ADHD, head trauma and autism. A typical client in his practice has more than one diagnosis, so healing begins with a careful and thorough inventory of a patient’s full story, identifying inter-related problems, establishing treatment priorities and, with his clients, re-authoring a health narrative that resonates and provides a path toward healing.
Dr. Romanelli is currently working on a book, tentatively titled “Health Stories that Work,” to help patients become active participants in co-authoring their health narratives, along with their providers, for better understanding and better care.
