
Why Character.AI’s CEO Still Lets His 6-Year-Old Use the App


Welcome back to In the Loop, TIME’s new twice-weekly newsletter about AI. If you’re reading this in your browser, why not subscribe to have the next one delivered straight to your inbox?


Who to Know: Karandeep Anand, Character.AI CEO

Character.AI is under fire. The chatbot platform, which allows users to chat with AIs that personify fictional characters, is the target of several lawsuits — including one from Megan Garcia, a mother whose 14-year-old son died by suicide after becoming obsessed with one of the bots, which allegedly encouraged him to end his own life.

In the wake of that lawsuit and others, last month Character.AI made a big announcement: it would ban users under 18 years old from having “open-ended conversations” with the chatbots on its platform. It was a huge pivot for a company that says Generations Z and Alpha make up the core of its more than 6 million daily active users, who spend an average of 70 to 80 minutes per day on the platform.

Last week, I sat down with Character.AI’s new CEO, Karandeep Anand, to discuss the ban and what led to it.

According to Anand, the timing of the ban has nothing to do with the legal cases facing Character.AI. Garcia’s wrongful death lawsuit, he stressed, stems from events that predate his time as CEO. And he defended the platform’s record on creating guardrails for under-18 users.

The ban on kids using Character.AI, Anand said, came partially as a result of new research showing the risks of chatbot usage, especially for children. “One of the contributing factors is coming from the new learnings that the longitudinal impact of chatbot interaction could be unhealthy, or is not fully understood,” he told me, pointing to research from OpenAI and Anthropic on the dangers of so-called AI sycophancy. In light of those findings, he decided that letting children have open-ended chats with the bots was too risky.

But the ban on kids using Character.AI is not total. They will still be allowed to access the platform’s other features, such as a short-form feed of AI-generated videos, similar to TikTok’s For You page, that prompts users to personalize popular videos by adding their own characters or modifying the prompts.

It surprised me, given the context of our conversation, to hear Anand say that his six-year-old daughter is an avid user of Character.AI. “What she used to do as daydreaming is now happening through storytelling with the character that she creates and talks to,” Anand says. “Even in conversations [where] she would respond hesitantly to me, she talks to the chatbot a lot more openly.” (Users under 13 are not allowed on the platform at all, Anand admits, so he only lets his daughter access Character.AI through his own account, with supervision.) His daughter’s enthusiasm for the audiovisual features of Character.AI gave Anand the confidence to bet the company on building those kinds of gamified experiences for children, he says, instead of allowing open-ended text chats.

The CEO is resigned to losing some users as a result of his decision. “I’m willing to bet that we will build more compelling experiences, but if it means some users churn, then some users churn,” he says. But he doesn’t completely rule out a reversal of the under-18 chatbot ban. “I’m pretty sure at some point, when the technology evolves enough, and we can have a lot more on-guard experiences for typing, we will bring those experiences back.”

Still, the pivot has delivered Character.AI — for so long the poster child of irresponsible AI development — into the strange position of being a cheerleader for safer online experiences for children. Anand says he welcomes a recent bill proposed by Senator Josh Hawley that would ban anyone under 18 from using AI companion apps nationwide. “The thing that would be really sad for the industry is if we make these decisions [to ban users under 18] and then the users end up gravitating to other platforms that are not taking this responsibility,” Anand tells me. “The bar for under 18 users, from a safety perspective, has to be raised… This has to be regulated.”

If you have a minute, please take our quick survey to help us better understand who you are and which AI topics interest you most.

What to Know: E.U. mulls privacy trade-off to attract AI money

European Union regulators are considering axing some of their wide-ranging privacy protections in a bid to make the continent a more attractive place for AI investment, amid lackluster economic growth.

Politico obtained documents showing officials are planning to change the E.U.’s flagship privacy law, the General Data Protection Regulation (GDPR), to allow AI companies to train and run their systems using previously protected categories of personal data.

What We’re Reading

We Need a Global Movement to Prohibit Superintelligent AI, by Andrea Miotti in TIME

Control AI chief Andrea Miotti calls for a global movement to stop superintelligent AI, much like the world banded together to close the growing hole in the ozone layer. “The extinction risk from superintelligence thus has the potential to cut through every division,” he writes. “It can unite people across political parties, religions, nations, and ideologies. Nobody wants their life, their family, their world to be destroyed.”
