Hundreds of public figures, including Nobel Prize-winning scientists, former military leaders, artists and British royalty, signed a statement Wednesday calling for a ban on work that could lead to computer superintelligence, a yet-to-be-reached stage of artificial intelligence that they said could one day pose a threat to humanity.
The statement proposes “a prohibition on the development of superintelligence” until there is both “broad scientific consensus that it will be done safely and controllably” and “strong public buy-in.”
Organized by AI researchers concerned about the fast pace of technological advances, the statement had more than 800 signatures from a diverse group of people. The signers include Nobel laureate and AI researcher Geoffrey Hinton, former Joint Chiefs of Staff Chairman Mike Mullen, rapper will.i.am, former Trump White House aide Steve Bannon and U.K. Prince Harry and his wife, Meghan Markle.
The statement adds to a growing list of calls for an AI slowdown at a time when AI is threatening to remake large swaths of the economy and culture. OpenAI, Google, Meta and other tech companies are pouring billions of dollars into new AI models and the data centers that power them, while businesses of all kinds are looking for ways to add AI features to a broad range of products and services.
Some AI researchers believe AI systems are advancing fast enough that they’ll soon demonstrate what’s known as artificial general intelligence, or the ability to perform intellectual tasks as well as a human can. From there, some researchers and tech executives believe superintelligence could follow, in which AI models perform better than even the most expert humans.
The statement is a product of the Future of Life Institute, a nonprofit group that works on large-scale risks such as nuclear weapons, biotechnology and AI. Among its early backers in 2015 was tech billionaire Elon Musk, who’s now part of the AI race with his startup xAI. Now, the institute says, its biggest recent donor is Vitalik Buterin, a co-founder of the Ethereum blockchain, and it says it doesn’t accept donations from big tech companies or from companies seeking to build artificial general intelligence.
Its executive director, Anthony Aguirre, a physicist at the University of California, Santa Cruz, said AI is developing faster than the public can grasp what is happening or what comes next.
“We’ve, at some level, had this path chosen for us by the AI companies and founders and the economic system that’s driving them, but no one’s really asked almost anybody else, ‘Is this what we want?’” he said in an interview.
“It’s been quite surprising to me that there has been less outright discussion of ‘Do we want these things? Do we want human-replacing AI systems?’” he said. “It’s kind of taken as: Well, this is where it’s going, so buckle up, and we’ll just have to deal with the consequences. But I don’t think that’s how it actually is. We have many choices as to how we develop technologies, including this one.”
The statement isn’t aimed at any one organization or government in particular. Aguirre said he hopes to force a conversation that includes not only major AI companies, but also politicians in the United States, China and elsewhere. He said the Trump administration’s pro-industry views on AI need balance.
“This is not what the public wants. They don’t want to be in a race for this,” he said. He said there might eventually need to be an international treaty on advanced AI, as there is for other potentially dangerous technologies.
The White House didn’t immediately respond to a request for comment on the statement Tuesday, ahead of its official release.
Americans are almost evenly split over the potential impact of AI, according to an NBC News Decision Desk Poll powered by SurveyMonkey this year. While 44% of U.S. adults surveyed said they thought AI would make life better for them and their families, 42% said they thought it would make their futures worse.
Top tech executives, who have offered predictions about superintelligence and signaled that they are working toward it as a goal, didn’t sign the statement. Meta CEO Mark Zuckerberg said in July that superintelligence was “now in sight.” Musk posted on X in February that the advent of digital superintelligence “is happening in real-time,” and he has previously warned about “robots going down the street killing people,” though Tesla, where Musk is CEO, is now working to develop humanoid robots. OpenAI CEO Sam Altman said last month that he’d be surprised if superintelligence didn’t arrive by 2030, and he wrote in a January blog post that his company was turning its attention to superintelligence.
Several tech companies didn’t immediately respond to requests for comment on the statement.
Last week, the Future of Life Institute told NBC News that OpenAI had issued subpoenas to it and its president as a form of retaliation for calling for AI oversight. OpenAI Chief Strategy Officer Jason Kwon wrote on Oct. 11 that the subpoena was a result of OpenAI’s suspicions around the funding sources of several nonprofit groups that had been critical of its restructuring.
Other signers of the statement include Apple co-founder Steve Wozniak, Virgin Group co-founder Richard Branson, conservative talk show host Glenn Beck, former U.S. national security adviser Susan Rice, Nobel-winning physicist John Mather, Turing Award winner and AI researcher Yoshua Bengio and the Rev. Paolo Benanti, a Vatican AI adviser. Several AI researchers based in China also signed the statement.
Aguirre said the goal was to have a broad set of signers from across society.
“We want this to be social permission for people to talk about it, but also we want to very much represent that this is not a niche issue of some nerds in Silicon Valley, who are often the only people at the table. This is an issue for all of humanity,” he said.