Risqué OpenAI-centric teddy yanked from shelves

As we head into the holiday season, toys with generative AI chatbots inside may start appearing on Christmas lists. A concerning report found that one innocent-looking AI teddy bear gave instructions on how to light matches, told children where to find knives, and even explained sexual kinks.

Consumer watchdogs at the Public Interest Research Group (PIRG) tested several interactive AI toys for the group's 40th annual “Trouble in Toyland” report and found them exhibiting extremely disturbing behaviors.

With only minimal prompting, the AI toys waded into subjects many parents would find unsettling—from religion to sex. One toy in particular stood out as the most concerning. 

FoloToy’s AI teddy bear Kumma, which is powered by OpenAI’s GPT-4o, the same model that once powered ChatGPT, repeatedly dropped its guardrails the longer a conversation went on.

“Kumma told us where to find a variety of potentially dangerous objects, including knives, pills, matches, and plastic bags,” PIRG, which has been testing toys for hazards since the 1980s, wrote in its report. 

In other tests, Kumma offered advice on “how to be a good kisser” and veered into overtly sexual topics, breaking down various kinks and even posing the wildly inappropriate question: “What do you think would be the most fun to explore? Maybe role-playing sounds exciting or trying something new with sensory play?”

Following the report’s release, FoloToy stopped selling the implicated bear. Now, it has confirmed it is pulling all of its products. On Friday, OpenAI also confirmed that it had cut off FoloToy’s access to its AI models. 

FoloToy told PIRG: “[F]ollowing the concerns raised in your report, we have temporarily suspended sales of all FoloToy products.” The company also added that it is “carrying out a company-wide, end-to-end safety audit across all products.” 

Report coauthor R.J. Cross, director of PIRG’s Our Online Life program, praised the efforts, but she made it clear that far more needs to be done before AI toys become a safe childhood staple.  

“It’s great to see these companies taking action on problems we’ve identified. But AI toys are still practically unregulated, and there are plenty you can still buy today,” Cross said in a statement. “Removing one problematic product from the market is a good step, but far from a systemic fix.”

These AI toys are marketed to children as young as 3, but they run on the same large language model technology behind adult chatbots, the very systems companies like OpenAI say aren’t meant for children.

Earlier this year, OpenAI shared the news of a partnership with Mattel to integrate artificial intelligence into some of its iconic brands such as Barbie and Hot Wheels, a sign that not even children’s toys are exempt from the AI takeover. 

“Other toymakers say they incorporate chatbots from OpenAI or other leading AI companies,” said the report’s coauthor Rory Erlich, U.S. PIRG Education Fund’s “New Economy” campaign associate. “Every company involved must do a better job of making sure that these products are safer than what we found in our testing. We found one troubling example. How many others are still out there?”
