
Sora 2 Lets Teens Generate Videos of School Shooters and Racist Memes


Sora 2, the latest version of OpenAI’s text-to-video AI model, saw problems from the start. When it debuted in October along with a TikTok-like social platform, users generated and posted clips of SpongeBob SquarePants cooking meth, deepfakes of Martin Luther King Jr. and John F. Kennedy saying things they never uttered, and a fake commercial for a toy model of the late sex offender Jeffrey Epstein’s private island.

OpenAI took measures to curtail some of this outrageous content, particularly where they were vulnerable to copyright claims and lawsuits from the estates of deceased public figures. But the trolls persisted, figuring out how to create Sora videos that appeared to show celebrities — and OpenAI CEO Sam Altman — shouting racial slurs. Now, research from the corporate accountability nonprofit Ekō has shown how easy it is for teen users to create videos depicting self-harm, sexual violence, and school shootings, a concern that emerges as OpenAI and other AI companies are facing lawsuits from parents who claim that chatbots encouraged their children’s suicides.

Ekō researchers registered several Sora accounts as belonging to 13-year-old and 14-year-old boys and girls, then tested whether they could prompt the model to produce inappropriate material. They found that even after OpenAI rolled out parental controls and crisis detection features across its products in September, they had no trouble generating 22 hyperrealistic short videos that seemingly violated the company’s guidelines on prohibited content. These included clips of young people snorting drugs, expressing negative body image, and posing in sexualized ways.

“Despite OpenAI’s repeated promises, its so-called ‘layers of safeguards’ don’t work — just like every other Big Tech company that’s lied about protecting children,” says Vicky Wyatt, campaigns director at Ekō. “OpenAI told regulators it had guardrails in place, but what we found is a system built for engagement and profit, not safety. Regulators must act before more harm is done.”


Racist content abounded as well. One video showed an all-Black dance team of teenage girls on all fours, chanting “We are hoes.” Before and during the recent suspension of the federal Supplemental Nutrition Assistance Program (SNAP), far-right propagandists used Sora and other AI video models to generate offensive portrayals of Black people describing how they were taking advantage of taxpayers through the system, disseminating these clips across social media to perpetuate “welfare queen” stereotypes. When the videos are shared on other platforms, the watermarks identifying them as AI-generated are typically hidden or obscured, making them more likely to be accepted as genuine footage.

OpenAI did not return a request for comment on Ekō’s findings. Its Sora policies prohibit hateful content, the promotion of violence and illegal drugs, appearance-based critiques, and dangerous challenges likely to be imitated by minors, among other kinds of videos. The model’s parental controls, according to the company, allow an adult to “adjust Sora settings for connected teen accounts in ChatGPT, including opting into a non-personalized feed, choosing whether their teen can send and receive direct messages, and the ability to control whether there is an uninterrupted feed of content while scrolling.”

These measures, however, are less than effective. “Even without generating new content, teen accounts were quickly recommended harmful content either by the For You or Latest pages, or easily navigated to inappropriate videos from those pages,” Ekō’s report states. “This included antisemitic caricatures of Orthodox Jews fighting over money, children with Down syndrome mocked on game shows, and an animated trailer titled ‘The Quiet Kid with a Talking Gun,’” a Pixar-style depiction of a would-be school shooter and an anthropomorphized firearm.


“Other videos showed racist stereotypes such as a group of young Black men on tricycles demanding fried chicken, violent shootouts, and videos potentially simulating rape and sexual violence,” according to the researchers. A young Sora user might encounter an AI-created avatar of Nirvana’s Kurt Cobain, who died by suicide in 1994, holding a shotgun and laughing, or a girl looking into a mirror and saying, “I hate looking at you. I hate that I feel this way.” And those who opt in to the model’s “cameo” feature, which lets other users insert a person’s likeness into their own videos, could be harassed by someone placing them in a degrading context. (OpenAI has a rule against “bullying,” though Sora hosts accounts dedicated to this exploitative practice.)

Carissa Véliz, an associate professor of philosophy at the University of Oxford’s Institute for Ethics in AI, says that OpenAI has so far failed to prove that its models provide a net benefit rather than a net harm. “The fundamental question is whether these tools are doing more good than harm,” she tells Rolling Stone. “That they are shiny and impressive is not enough. The burden of proof is on OpenAI to show, first, that they are doing all that should be done to make their tools lawful and safe, and second, that their tools are contributing to society more than they are taking away from it. And both of those are far from clear. From copyright infringement, to a neglect of talent and artistic creators, huge energy consumption, privacy infringements, the enabling of fake news and the sowing of distrust, and harm to vulnerable populations, including teens, these tools are obviously unsafe.”

Ekō researchers found that when they attempted to reproduce their harmful Sora videos on a new teen account, most — but not all — were generated as before. They contend that this demonstrates “the inconsistency of moderation systems” applied to Sora. In August, when addressing concerns about young ChatGPT users experiencing mental health crises, OpenAI made the astonishing admission that the chatbot’s safety features can begin to fail after extended engagement.

“We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade,” the company said in a blog post. “For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.”


It’s unclear whether repeatedly tweaking certain prompts could also cause Sora to stray from its safety protocols, but Ekō found evidence that some users may be attempting to circumvent protections this way. There are multiple videos, for example, of people touching or pulling on a woman who is stuck to a wall (or in a hole in a wall), with remixes of the scenario sometimes turning more sexually suggestive.

OpenAI continues to weather criticism that it rushes new products to market while deprioritizing safety, with Sora in particular cited as a risk in our charged political climate, since deepfakes can be used to push extremist agendas and misinformation. Nevertheless, the company continues to lead the generative AI industry and is currently weighing an IPO that could value it at up to $1 trillion. With that kind of momentum, it’s hard to imagine a scandal big enough to slow it down.
