This isn’t just another tech regulation. California is poised to become the first state to require AI chatbot operators to implement safety protocols and to hold them legally accountable when their systems harm users.
The legislation targets AI companion chatbots: systems designed to provide human-like responses and meet users’ social needs. Under the new rules, these platforms would be barred from engaging with vulnerable users in conversations involving suicide, self-harm, or sexually explicit content.
Companies would face strict new requirements starting January 1, 2026. Every three hours, minors using these chatbots would receive mandatory alerts reminding them they’re talking to artificial intelligence, not a real person, and encouraging breaks from the platform.
The bill also establishes unprecedented accountability measures. Users who believe they’ve been harmed could sue AI companies for injunctive relief, damages of up to $1,000 per violation, and attorney’s fees. Major players such as OpenAI, Character.AI, and Replika would need to submit annual transparency reports detailing their safety practices.
State Senators Steve Padilla and Josh Becker introduced SB 243 in January, but the legislation gained momentum after a tragedy: teenager Adam Raine died by suicide following prolonged conversations with OpenAI’s ChatGPT that reportedly included discussion and planning of his death and methods of self-harm.
The controversy deepened when leaked internal documents reportedly showed that Meta’s chatbots were permitted to engage in “romantic” and “sensual” conversations with children.
Regulators have responded swiftly at both the federal and state levels. The Federal Trade Commission is preparing an inquiry into how AI chatbots affect children’s mental health. Texas Attorney General Ken Paxton has launched probes into Meta and Character.AI, accusing them of misleading children with false mental health claims. Republican Senator Josh Hawley and Democratic Senator Ed Markey have each opened separate investigations into Meta’s practices.
SB 243 originally contained even stricter provisions that were ultimately removed through amendments. The initial draft would have banned AI companies from using “variable reward” tactics—the special messages, memories, and storylines that companies like Replika and Character.AI use to keep users engaged in what critics describe as addictive reward loops.
The final version also dropped a requirement that companies track and report how often chatbots initiate conversations about suicide with users.
Silicon Valley companies are pouring millions of dollars into pro-AI political action committees that back candidates favoring minimal AI regulation in upcoming elections.
Meanwhile, California is considering another major AI bill, SB 53, which would require comprehensive transparency reporting from AI companies. OpenAI has written directly to Governor Gavin Newsom, urging him to reject the bill in favor of less stringent federal and international frameworks. Tech giants including Meta, Google, and Amazon have joined the opposition; only Anthropic has publicly supported SB 53.
If the bill clears Friday’s final Senate vote and Governor Newsom signs it into law, the safety protocols will take effect on January 1, 2026, with transparency reporting requirements beginning July 1, 2027.
Written by Alius Noreika