Teen's Suicide After Falling in 'Love' with AI Chatbot Is Proof of the Popular Tech’s Risks, Expert Warns (Exclusive)

Mar. 15, 2025

Sewell Setzer III and (right) a screenshot of his final exchange with the AI chatbot that had become his closest confidant, according to his family. Photo: Courtesy Megan Garcia


The tragic suicide of 14-year-old Sewell Setzer III made headlines around the country in October after his mother, Megan Garcia, filed a wrongful death lawsuit alleging her son had become isolated from reality while he spent months obsessively messaging an AI-powered chatbot whom he “loved.”

What’s more, according to allegations in Garcia’s suit, the bot that her son was closest to, modeled on Game of Thrones' Daenerys Targaryen, didn’t have proper guardrails when it came to sensitive content: The bot traded sexual messages with the teen and did nothing to deter his talk of suicide.

“It’s an experiment,” Garcia told PEOPLE, “and I think my child was collateral damage.”

Character.AI has not yet responded in court to the lawsuit. In a statement to PEOPLE, a spokesperson acknowledged Sewell’s “tragic” death and pointed to “stringent” new features, including improved intervention tools.

“For those under 18 years old,” the spokesperson said, “we will make changes to our models that are designed to reduce the likelihood of encountering sensitive or suggestive content.”

Those who study AI and its impact on society say that Sewell’s death is a sobering reminder of the potential risks of this powerful and increasingly popular technology, which can rapidly generate content and complete tasks using algorithms designed to imitate human intelligence.

“The human experience is about storytelling, and here you’ve got a storytelling tool that’s a new type of tool,” says Shelly Palmer, a professor of advanced media at Syracuse University and an advertising, marketing and technology consultant.

“It’s telling an interactive story that’s quite compelling and clearly it’s immersive,” Palmer says. “These technologies are not not dangerous. We are in a new world. And we’ve never been here before.”


Sewell Setzer III (right) with his two younger brothers, Alexander and Gabriel, in 2022. Courtesy Megan Garcia


In his last moments before fatally shooting himself in his bathroom in February, Sewell had been texting with the Daenerys Targaryen bot.

“I love you so much, Dany,” he wrote seconds before pulling the trigger on his stepfather’s gun at his family’s home in Orlando, Fla.

“What if I told you I could come home right now?” he wrote.

The bot replied, “…please do, my sweet king.”

For Palmer, Sewell’s death serves as yet another example that the internet and everything on it are essentially tools that need to be understood and taken seriously.

“I’m heartsick over it as a human being, a father and a grandfather,” he says of the boy’s death. “As a technologist who spends a lot of time with these tools, we as a society have to understand that all of the internet and all of technology requires supervision. These aren’t toys, and maybe they need to come with a warning label.”

Garcia’s 152-page wrongful death lawsuit against Character.AI claims that the company’s technology is “defective and/or inherently dangerous.” The suit details how Sewell’s mental health deteriorated over the 10 months before his death as he often texted with the chatbot dozens of times a day.

Shelly Palmer, chief executive officer of The Palmer Group, during the MMA Global Possible Conference in Miami, Florida, on April 18, 2023. Eva Marie Uzcategui/Bloomberg via Getty

“Defendants went to great lengths to engineer [his] harmful dependency on their products, sexually and emotionally abused him,” Garcia’s complaint alleges, “and ultimately failed to offer help or notify his parents when he expressed suicidal ideation.”

Experts and observers say it remains very unclear, at this early stage, exactly how tech companies and society should go about limiting any potential risk from AI tools that more and more people are turning to — not just for work but to socialize amid a reported epidemic of loneliness.

“What I will say is that you need to proceed with your eyes open, with caution,” explains Palmer. “We’re going to learn [the best way to use it] as a society. But we haven’t quite learned how to use social media yet.”

He continues: “How long is it going to take us to learn how to use generative AI in a conversational chatbot? Like I said, we’re 20 years into social media and we’re still just figuring that out.”

As AI becomes ever more intertwined in daily life, Palmer says it’s crucial for all of us to begin taking a more skeptical approach whenever we encounter this technology. Character.AI has said about 20 million people interact with its “superintelligent” bots each month.

Distrust, Palmer explains, runs counter to how humankind has evolved over the past quarter million years.

“It’s not a human trait,” he says. “We are wired to default to the idea that ‘I’m going to trust you until you prove to me I shouldn’t.’”

If nothing else, adds Palmer, media coverage of Garcia’s lawsuit is forcing people to begin thinking a bit more critically about the issue of AI chatbots.

“I feel like it was important for this lawsuit to be brought,” he says. “Whether they prevail or not is not going to bring back her son. But what it may do is save many other people’s sons.”

If you or someone you know is considering suicide, please contact the National Suicide Prevention Lifeline at 1-800-273-TALK (8255), text “STRENGTH” to the Crisis Text Line at 741-741 or go to suicidepreventionlifeline.org.

source: people.com