AI-powered chatbot platform Character AI is introducing “stringent” new safety features following a lawsuit filed by the mother of a teen user who died by suicide in February. The measures will include “improved detection, response and intervention related to user inputs that violate our Terms or Community Guidelines,” as well as a time-spent notification, a company spokesperson told Decrypt, noting that the company could not comment on pending litigation. Character AI did, however, express sympathy over the user’s death and outline its safety protocols in a blog post Wednesday. “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family,” Character.ai tweeted. “As a company, we take the safety of our users very seriously.”
In the months before his death, 14-year-old Florida resident Sewell Setzer III had grown increasingly attached to a user-generated chatbot named after Game of Thrones character Daenerys Targaryen, according to the New York Times. He often interacted with the bot dozens of times per day and sometimes exchanged romantic and sexual content. Setzer communicated with the bot in the moments leading up to his death and had previously shared thoughts of suicide, the Times reported.
Setzer’s mother, lawyer Megan L. Garcia, filed a lawsuit Tuesday seeking to hold Character AI and its founders, Noam Shazeer and Daniel De Freitas, responsible for her son’s death. Among other claims, the suit alleges that the defendants “chose to support, create, launch, and target at minors a technology they knew to be dangerous and unsafe,” according to the complaint. Garcia is seeking an unspecified amount of damages. Google LLC and Alphabet Inc. are also named as defendants in the suit. In August, Google rehired Shazeer and De Freitas, both of whom had left the tech giant in 2021 to found Character AI, as part of a $2.7 billion deal that also included licensing the chatbot startup’s large language model.
Beyond the newly announced changes, Character AI has “implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation,” the company’s statement said. It will also alter its models “to reduce the likelihood of encountering sensitive or suggestive content” for users under 18 years old. Character AI is one of many AI companionship apps on the market, which often have less stringent safety guidelines than conventional chatbots like ChatGPT. Character AI allows users to customize their companions and direct their behavior.
The lawsuit, which comes amid growing concerns among parents about the psychological impact of technology on children and teenagers, claims that Setzer’s attachment to the bot had a negative effect on his mental health. Setzer received a diagnosis of mild Asperger’s syndrome as a child and had recently been diagnosed with anxiety and disruptive mood dysregulation disorder, the Times reported. The suit is one of several moving through the courts that are testing the legal protections provided to social media companies under Section 230 of the Communications Decency Act, which shields them from liability associated with user-generated content. TikTok is petitioning to rehear a case in which a judge ruled that it could be held liable after a 10-year-old girl died while trying to complete a “blackout challenge” that she saw on the app. The suit is the latest problem for Character AI, which came under fire last month for hosting a chatbot named after a murder victim.