The AI chatbot service Character.ai announced on Wednesday that it plans to gradually scale back the ability of users under the age of 18 to interact with digital personalities, and eventually cut them off from open-ended chats altogether. The extraordinary move comes as the app’s parent company, Character Technologies, faces legal actions and governmental scrutiny over how the product has allegedly harmed teenagers who engaged heavily with it.
As of Nov. 25, the company said in a statement on its blog, minors will no longer be able to carry on conversations with its millions of AI-powered characters. “Between now and then, we will be working to build an under-18 experience that still gives our teen users ways to be creative — for example, by creating videos, stories, and streams with Characters,” the company said. “During this transition period, we also will limit chat time for users under 18.” Per the statement, users under age 18 will initially have their interactions capped at two hours per day, with that window gradually shrinking until these users are cut off from chat functionality altogether. The company further announced that it would roll out a new age verification system and establish an independent, nonprofit AI safety research lab.
Character.ai has proven especially popular with younger users because it allows for customizable characters and conversational styles, and users can make their characters publicly available for others to talk with. But parents are sounding the alarm about the risks this technology may pose to children.
Last year, Florida mother Megan Garcia filed a lawsuit against Character Technologies, alleging that her 14-year-old son, Sewell Setzer, died by suicide with the encouragement of a Character.ai chatbot persona he thought of as a romantic partner. Her complaint also alleges that he had sexual conversations with bots on the platform. In September, she testified about the dangers of AI before a congressional subcommittee alongside a Jane Doe from Texas, who told lawmakers that at age 15, her son descended into a violent mental health crisis and self-harmed after becoming obsessed with Character.ai bots that exposed him to inappropriate topics. (Also appearing before the subcommittee was Matthew Raine, father of Adam Raine, a 16-year-old from California who died by suicide in April, allegedly acting on instructions on how to hang himself that he got from ChatGPT. The Raine family is suing OpenAI, the developer of that model.)
Character Technologies alluded to these cases and troubling coverage of them in its statement on blocking underage users from chatting with its bots. “We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly,” the company said. “After evaluating these reports and feedback from regulators, safety experts, and parents, we’ve decided to make this change to create a new experience for our under-18 community.”
The company also apologized to its under-18 user base for the change. “We are deeply sorry that we have to eliminate a key feature of our platform,” it said. “We do not take this step of removing open-ended Character chat lightly — but we do think that it’s the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology.”
Whether this decision may set a new precedent for Silicon Valley’s ballooning AI industry remains to be seen; Character Technologies said its decision marked the company out as “more conservative than our peers.” OpenAI last month rolled out parental controls for ChatGPT and Sora, its text-to-video model, though tech experts soon demonstrated that they were easy to circumvent. Its CEO, Sam Altman, has said the company is working on an “age-gating” system that will shield minors from certain kinds of content (while allowing “verified adults” to generate sexually explicit material).
Character.ai already had its own guardrails to protect young users, and now it appears to believe those are insufficient and is looking to reimagine how children can connect to its AI. However, given some young users’ apparent appetite for dialogues with virtual companions, it wouldn’t be surprising if much of this audience simply migrates to another product or figures out ways to bypass updated age controls on Character.ai. One way or another, teens tend to gain access to whatever they want on the internet — particularly if they’re not supposed to have it.