At the end of 2024, OpenAI unveiled Sora, a text-to-video AI model that could generate moving images based on user prompts, ranging from stylistic animations to photorealistic “footage.” Although these snippets often included visual errors that clearly marked them as products of AI processing, the ramifications of such technology were felt far and wide: As early adopters proclaimed new creative opportunities, political commentators fretted over the potential for disinformation, while the debate in Hollywood about whether and how to leverage AI took on added urgency. (In March, OpenAI even held a Sora event at an L.A. movie theater to woo industry insiders, screening 11 short “films” made with the model.)
Almost a year after Sora’s debut, the AI boom (which many have argued is a bubble) continues apace, and OpenAI has now unveiled Sora 2. The AI firm describes the updated model’s outputs as “more physically accurate, realistic, and more controllable than prior systems,” with “synchronized dialogue and sound effects.” OpenAI CEO Sam Altman, meanwhile, called it “a tremendous research achievement,” and said that using it was “the most fun I’ve had with a new product in a long time.” For now, Sora 2 is only accessible by exclusive invite — an OpenAI spokesperson tells Rolling Stone that they have a waiting list and are “unable to provide a code at this time” — but all the hype and the tightly controlled release don’t mean the rollout has been entirely smooth sailing.
Disinformation and Extremist Content
An important distinction between Sora and Sora 2 is that the latter is now the basis for a new app — simply called “Sora” — that functions as a social media network. It’s essentially a version of TikTok with nothing but artificially generated content. As such, videos appear in a user’s feed, and can be liked and remixed by others on the platform. Last week, on the day Sora 2 officially launched, an OpenAI employee who works on the product claimed to have posted the first viral video there: a deepfake of security camera footage showing Altman shoplifting graphics processing units, or GPUs, hardware essential for the computing power to run AI systems such as Sora itself.
The implications were obvious. Not only did other people generate similar bogus footage of Altman and post it as if it were authentic, but tech reporters at The Washington Post and elsewhere soon demonstrated that Sora 2 could depict real people dressed as Nazis, fabricate false archival footage of John F. Kennedy and Martin Luther King Jr. saying things they never really did, insert other users into historical events such as the Jan. 6 Capitol riot, and generate “ragebait” scenes of confrontations between individuals of different races. While plenty of the early videos were patently unrealistic — a segment in which the late rapper Tupac Shakur appears on Mister Rogers’ Neighborhood, for example, or a 1990s-era commercial for a toy version of Jeffrey Epstein‘s private island — it’s clear that the updated model can be abused to extremist ideological ends.
Copyright Infringement
Pikachu, Ronald McDonald, the kids of South Park, and Peter Griffin from Family Guy were among the many pieces of protected intellectual property to show up on the Sora app shortly after it launched. Copyright considerations aside, some of it was harmless, yet it doesn’t take a corporate lawyer to understand that images of SpongeBob SquarePants cooking meth or sporting a Hitler mustache are going to cause legal headaches down the line. “The only conclusion I can draw is OpenAI is trying to get sued,” quipped one early user on X, sharing screenshots of Sora videos featuring well-known cartoon characters.
Sure enough, just three days after the launch of Sora 2, OpenAI had to crack down on this legally hazardous content with a revised copyright policy. Whereas the company had first announced that any material was fair game unless rightsholders opted out of the platform — potentially a sneaky way of permitting the appropriation of almost any branded content — Altman announced in a blog post on Friday that they were switching to an “opt-in” arrangement that would give rightsholders “more granular control” over how their IP does or doesn’t appear on Sora. The CEO noted that some “edge cases” might get through the added guardrails, though users did start receiving error notices on prompts that indicated a possible “similarity to third-party content.”
Mounting Energy Usage
Altman’s Friday update also acknowledged that Sora users “are generating much more than we expected per user, and a lot of videos are being generated for very small audiences.” An explosion of video generation presents a significant strain on OpenAI’s data servers. By one estimate from researchers writing in MIT Technology Review earlier this year, even a short, non-high-definition video clip may require more than 700 times the energy it takes to produce a high-quality still image.
Right before Sora 2 became available at the end of last month, Forbes reported on the massive scale of OpenAI’s burgeoning energy needs, revealing that their forthcoming round of new data centers will consume roughly the amount of electricity used by New York City and San Diego combined (a figure that tops 15 gigawatts total during peak summer heat). The company, which is planning to spend at least $1 trillion on building data center infrastructure through deals with tech partners including Oracle, SoftBank, and Nvidia, has said it will need more than 20 gigawatts of energy to meet growing demand — the equivalent of that produced by 20 nuclear power plants. On Monday, it confirmed yet another deal, this one with chipmaker Advanced Micro Devices, saying it would deploy 6 gigawatts’ worth of their GPUs and was eyeing a 10 percent stake in the company.
This incredible energy usage has already put significant strain on the U.S. electric grid and led AI companies to start working on supplying their own power. The full environmental impact of the AI “gold rush” and the ever-increasing need for electricity remains murky, but there’s no question that the technology adds to carbon emissions. It also demands a tremendous amount of water to cool data center hardware, which can disrupt local ecosystems and municipal water systems.
Rivals Aren’t Giving Up Yet
Of course, OpenAI isn’t the only tech giant in the AI race. This year, Google launched their own video generator, Veo 3. Midjourney has a relatively new video feature as well. And then there’s Grok Imagine, the video model from Elon Musk’s xAI.
Following the splashy release of Sora 2, Musk — a founder of OpenAI who left the board in 2018 and has since become a fierce critic of the company and Altman — appeared particularly eager to plug his competitor product. “Grok Imagine is improving super fast,” he posted on X, his social media platform, on Sunday, elsewhere touting a “major update” to the app. Yet the AI creations the world’s richest man has recently shared are indistinguishable from the content he’s been plugging all along: moving images of anime-style female characters, usually in skimpy, form-fitting outfits and sci-fi settings.
For now, it seems, Sora 2 has the edge on other sophisticated models, though that could easily change in the future. There’s also no guarantee that OpenAI’s vision of a social app that’s all AI-generated video will have any staying power. As they and their rivals burn through unthinkable sums of money in hopes of turning a profit someday, the fate of this industry will ultimately hinge on whether it can achieve something beyond mere fleeting novelty.