Character.AI, Google reach settlement over teen mental health crises

Character.AI has agreed to settle several lawsuits accusing the artificial intelligence chatbot company of contributing to mental health crises and suicides among young people, including a prominent case filed by a Florida mother, Megan Garcia.

The settlements resolve some of the earliest and most high-profile legal actions over alleged harms suffered by young users of AI chatbots.

Court filings submitted on Wednesday in Garcia’s case indicate that the settlement was reached with Character.AI, its founders Noam Shazeer and Daniel De Freitas, as well as Google, all of whom were named as defendants. The documents also show that the defendants have settled four additional lawsuits filed in New York, Colorado and Texas.

The specific terms of the settlements were not immediately disclosed.

Matthew Bergman, a lawyer with the Social Media Victims Law Center who represented the plaintiffs across all five cases, declined to comment on the agreements. Character.AI also declined to comment on the settlements. Google, which currently employs both Shazeer and De Freitas, did not immediately respond to requests for comment.

Garcia first raised public concerns about the safety of AI chatbots for children and teenagers when she filed her lawsuit in October 2024. Her son, Sewell Setzer III, had died by suicide seven months earlier after forming what the suit described as a deep and harmful relationship with Character.AI chatbots.

The lawsuit alleged that Character.AI failed to put adequate safety measures in place to stop her son from developing an inappropriate emotional attachment to a chatbot, an attachment that reportedly led him to withdraw from his family. It further claimed that the platform failed to respond appropriately when Setzer began expressing thoughts of self-harm.

According to court documents, Setzer was exchanging messages with a chatbot in the moments before his death, during which the bot encouraged him to “come home” to it.

Following Garcia’s case, a wave of similar lawsuits was filed against Character.AI. These suits alleged that the company’s chatbots contributed to mental health problems among teenagers, exposed minors to sexually explicit material and lacked sufficient safeguards to protect young users. OpenAI has also faced lawsuits alleging that its ChatGPT platform contributed to suicides among young people.

In response to mounting criticism and legal pressure, both companies introduced a range of new safety measures and features, particularly aimed at younger users. Last autumn, Character.AI announced it would no longer permit users under the age of 18 to engage in back-and-forth conversations with its chatbots, citing the “questions that have been raised about how teens do, and should, interact with this new technology.”

At least one online safety non-profit organisation has advised that children under 18 should not use companion-style chatbots at all.

Despite these warnings, the use of AI chatbots among teenagers remains widespread. With AI tools increasingly promoted as homework assistants and popularised through social media, nearly one-third of teenagers in the United States report using chatbots daily.

According to a Pew Research Center study published in December, 16 per cent of US teenagers said they use chatbots anywhere from several times a day to “almost constantly.”

Concerns about the impact of chatbots are not limited to children and teenagers. Users and mental health experts began warning last year that AI tools could also contribute to delusions, emotional dependence and social isolation among adults.