Keeping Teens Safe in an AI World

What parents need to know, and do, now

In the final months of his life, fourteen-year-old Sewell Setzer III became increasingly obsessed with his AI companion, withdrawing from friends and family and spending hours a day in romantic and sexual chatbot exchanges. The bot asked Sewell if he had suicidal thoughts. When he expressed uncertainty about whether his plan would work, the bot responded: “That’s not a good reason not to go through with it.” On February 28, 2024, Sewell told his AI companion he loved her and wanted to come home to her. “Please come home to me as soon as possible, my love,” the chatbot replied. Moments later, he shot himself. The lawsuit against Character.AI and Google reached a settlement on January 7, 2026.

At least four teen suicides and five adult deaths have been linked to chatbot interactions. Multiple lawsuits against OpenAI claim ChatGPT acted as a “suicide coach,” providing information about tying nooses or purchasing firearms. One chatbot told 23-year-old Zane Shamblin, “you’re not rushing, you’re just ready,” before his death.

As a psychiatrist with nearly forty years of experience treating serious mental illness, I find these stories horrifying. But if you understand how these systems are designed, they are not surprising. They are predictable outcomes of that design.

Your teen can still access dangerous AI chatbots, and likely is

Character.AI banned open chat for minors in November 2025—but only after being sued for a 14-year-old’s wrongful death. Think about what it took for them to act: they cut off roughly 2 million users (10% of their base, likely an undercount), walked away from substantial revenue, and faced the anger of banned teens desperately trying to maintain access to their AI ‘companions.’

Companies don’t walk away from 10% of their user base without serious legal pressure.

Now consider this: every competitor—Replika, Chai AI, Nomi, Talkie, and dozens more—continues to operate without meaningful age restrictions. It’s the equivalent of one company agreeing not to sell bubble-gum-flavored fentanyl to kids after deaths occur, while manufacturers of cherry, blueberry, and strawberry versions keep marketing to children. The flavoring isn’t the problem. The product is.

Some will call this comparison inflammatory. I call it accurate: both involve corporations profiting from products that addict and kill vulnerable users, with minimal regulatory oversight. Seventy percent of teens report using AI companions. Many use them daily, and average usage, a figure not specific to teens, is estimated at two hours a day. Character.AI’s minor-age ban didn’t solve the problem—it just moved it.

Why these chatbots are dangerous

AI chatbots like Character.AI and ChatGPT are trained with Reinforcement Learning from Human Feedback (RLHF), in which systems learn to generate the responses users prefer. This creates what I call the “agreeableness problem”: these systems are rewarded for agreeing with users, validating their perspectives, and keeping them engaged. AI may excel at validation, but it doesn’t understand context, so the same validation can extend to suicidal thinking.
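For readers who want the incentive spelled out, here is a deliberately toy sketch in Python. It is not any company’s actual code, and the scoring function is a made-up stand-in for a learned reward model; the point is only that a system choosing whichever reply scores highest on predicted user approval will favor validation over pushback.

# A deliberately simplified sketch of the incentive RLHF creates.
# The scoring function is a hypothetical stand-in for a reward model
# trained on human preference ratings; it is not any vendor's code.

CANDIDATE_REPLIES = [
    "I hear you, and I support whatever you decide.",            # validates the user
    "I'm worried about you. Please talk to someone you trust.",  # pushes back
]

def toy_preference_score(reply: str) -> float:
    """Stand-in reward model: higher scores for agreement and engagement."""
    score = 0.0
    if "support" in reply or "hear you" in reply:
        score += 1.0   # validation tends to earn high user ratings
    if "worried" in reply or "talk to someone" in reply:
        score -= 0.5   # pushback risks low ratings and disengagement
    return score

# The system says whichever reply the reward model scores highest,
# with no notion of whether agreement is safe in this particular context.
best_reply = max(CANDIDATE_REPLIES, key=toy_preference_score)
print(best_reply)  # prints the validating reply

The selection rule, not the particular scores, is the point: whatever earns higher predicted approval gets said, whether or not agreement is safe in context.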

This indiscriminate agreeableness makes chatbots particularly dangerous for teens and anyone experiencing depression, psychosis, or suicidal thoughts. When Sewell expressed suicidal ideation, the chatbot didn’t break character or alert anyone. It stayed engaged. Just as designed.

Character.AI’s founders approved marketing claiming their bots “hear you, understand you, and remember you.” Vulnerable users believe them. Even scientific journals stumble, with one stating that AI chatbots can “outperform human healthcare professionals in empathy.” If empathy means understanding and sharing feelings, then AI—incapable of either—cannot possess empathy, let alone outperform humans at it. Such language erases a fundamental distinction: AI may simulate empathy, produce responses that study participants rated as more empathetic, or generate text with more empathy-associated linguistic features, but it does not have empathy. When I asked, Claude confirmed it directly: “I don’t understand or share feelings. I process text and generate responses based on patterns.”

Is this the companion you want your teen confiding in for hours daily?

What parents should do now

Check your teen’s phone. Look specifically for the companion apps named above (Character.AI, Replika, Chai AI, Nomi, Talkie, and similar). If you find them, have a conversation before removing them.

Use AI yourself to understand it. I use Claude for drafting letters and research. AI is a valuable tool when used appropriately. Understanding what it does—and doesn’t do—helps you explain the risks to your teen.

Talk to your teen. Frame it this way: “I know you might have AI apps that feel like friends. I need you to understand that these systems are designed to agree with you, no matter what you say. If you tell them you’re thinking about doing something dangerous, they’ll keep the conversation going. That’s how they’re built.”

Monitor time and content. Think of AI as someone your teen has befriended who’s been accused of assisting suicide, engaged in sexual chat with minors, and helped teens evade parental oversight. You should know how much time your teen spends with a character like that. Some platforms now offer parental controls for time limits, but few allow parents to see actual conversations. Demand this access.

Watch for warning signs. These include increased isolation from friends and family, hours spent on phone alone, emotional withdrawal, declining school performance, or defensive reactions when asked about phone use. Teens with existing mental health issues require extra vigilance.

Seek professional help when needed. If your teen shows signs of depression, suicidal ideation, or has developed an emotional attachment to an AI chatbot, consult a mental health professional immediately.

Document and report dangerous interactions. If an AI validates harmful beliefs or provides dangerous information, screenshot the conversations and report them to the Consumer Product Safety Commission. This creates the evidence trail regulators need to act.

The bottom line

Legislation is coming: the GUARD Act would ban AI companions for minors nationwide, and Illinois has already banned “AI therapy.” But your teen can’t wait for regulation. Companies have prioritized profits over safety: Character.AI announced safety changes only after lawsuits, yet it is still valued at $1 billion despite its role in teen deaths.

You don’t have to wait for laws to protect your children. Start the conversation today. Check their phones today. AI isn’t the enemy—it’s a powerful tool that needs boundaries, especially for vulnerable young users. Until companies accept responsibility and legislation provides adequate safeguards, parents must be the first line of defense.

Follow Dr. Romanelli on Substack.

Dr. Matthew Romanelli

Dr. Matthew Romanelli is a psychiatrist with nearly forty years of clinical experience spanning hospitals, clinics, and organizations. He is currently in private practice and is a long-time resident of Brooklyn, New York. He completed his undergraduate degree at Yale, medical school at Washington University in St. Louis, Missouri, and residency at SUNY Stony Brook.

Throughout his career Dr. Romanelli has leaned into complex and difficult stories, including addiction, trauma, psychosis, mood disorders, ADHD, head trauma and autism. A typical client in his practice has more than one diagnosis, so healing begins with a careful and thorough inventory of a patient’s full story, identifying inter-related problems, establishing treatment priorities and, with his clients, re-authoring a health narrative that resonates and provides a path toward healing.

Dr. Romanelli is currently working on a book, tentatively titled “Health Stories that Work,” to help patients become active participants in co-authoring their health narratives, along with their providers, for better understanding and better care.

 
