
Will AI start nuclear war? What Netflix movie A House of Dynamite misses.


For as long as AI has existed, humans have harbored fears about AI and nuclear weapons, and movies are a great example of those fears. Skynet from the Terminator franchise becomes sentient and launches nuclear missiles. WOPR from WarGames nearly starts a nuclear war because of a miscommunication. Kathryn Bigelow’s recent release, A House of Dynamite, asks whether AI is involved in a nuclear missile strike headed for Chicago.

AI is already in our nuclear enterprise, Vox’s Josh Keating tells Today, Explained co-host Noel King. “Computers have been part of this from the beginning,” he says. “Some of the first digital computers ever developed were used during the building of the atomic bomb in the Manhattan Project.” But we don’t know exactly where or how it’s involved.

So do we need to worry? Well, maybe, Keating argues. But not about AI turning on us.

Below is an excerpt of their conversation, edited for length and clarity. There’s much more in the full episode, so listen to Today, Explained wherever you get podcasts, including Apple Podcasts, Pandora, and Spotify.

There’s a part in A House of Dynamite where they’re trying to figure out what happened and whether AI is involved. Are these movies with these fears onto something?

The interesting thing about movies, when it comes to nuclear war, is that this is a kind of war that’s never been fought. There are no veterans of nuclear wars, other than the two bombs we dropped on Japan, which is a very different scenario. I think movies have always played an outsize role in debates over nuclear weapons. You can go back to the ’60s, when the Strategic Air Command actually produced its own rebuttal to Dr. Strangelove and Fail Safe. In the ’80s, the TV movie The Day After was a galvanizing force for the nuclear freeze movement. President [Ronald] Reagan apparently was very disturbed when he watched it, and it influenced his thinking on arms control with the Soviet Union.

On the specific topic I’m looking at, which is AI and nuclear weapons, there have been a surprising number of movies with that as the plot. And it comes up a lot in the policy debates over this. I’ve had people who are advocates for integrating AI into the nuclear command system saying, “Look, this isn’t going to be Skynet.” General Anthony Cotton, who’s the current commander of Strategic Command — which is the branch of the military responsible for nuclear weapons — advocates for greater use of AI tools. He referred to the 1983 movie WarGames, saying, “We’re going to have more AI, but there’s not going to be a WOPR in Strategic Command.”

Where I think [the movies] fall a little short is that the fear tends to be that a superintelligent AI is going to take over our nuclear weapons and use them to wipe us out. For now, that’s a theoretical concern. What I think is the more real concern is that, as AI gets into more and more parts of the command and control system, do the human beings in charge of the decisions about nuclear weapons really understand how the AIs are working? And how is it going to affect the way they make these decisions, which could be — not exaggerating to say — some of the most important decisions ever made in human history?

Do the human beings working on nukes understand the AI?

We don’t know exactly where AI is in the nuclear enterprise. But people will be surprised to know how low-tech the nuclear command and control system really was. Up until 2019, they were using floppy disks for their communication systems. I’m not even talking about the little plastic ones that look like the save icon on Windows. I mean the old ’80s bendy ones. They want these systems to be secure from outside cyber interference, so they don’t want everything hooked up to the cloud.

But there’s an ongoing multibillion-dollar nuclear modernization process underway, and a big part of that is updating these systems. Multiple commanders of StratCom, including a couple I talked to, said they think AI should be part of this. What they all say is that AI should not be in charge of the decision as to whether we launch nuclear weapons. They think that AI can analyze massive amounts of information and do it much faster than people can. And if you’ve seen A House of Dynamite, one thing that movie shows really well is how quickly the president and senior advisers are going to have to make some absolutely extraordinary, difficult decisions.

What are the big arguments against getting AI and nukes in bed together?

Even the best AI models we have available today are still prone to error. Another worry is that there could be outside interference with these systems. It could be hacking or a cyberattack, or foreign governments could come up with ways to seed inaccurate information into the model. There has been reporting that Russian propaganda networks are actively trying to seed disinformation into the training data used by Western consumer AI chatbots. And another is just how people interact with these systems. There is a phenomenon that a lot of researchers have pointed out, called automation bias, which is just that people tend to trust the information that computer systems give them.

There are abundant examples from history of times when technology has actually led to near nuclear disasters, and it’s been humans who’ve stepped in to prevent escalation. There was a case in 1979 when Zbigniew Brzezinski, the US national security adviser, was actually woken up by a phone call in the middle of the night informing him that hundreds of missiles had just been launched from Soviet submarines off the coast of Oregon. And just before he was about to call President Jimmy Carter to tell him America was under attack, there was another call that [the first] had been a false alarm. A few years later, there was a very famous case in the Soviet Union. Colonel Stanislav Petrov, who was working in their missile detection infrastructure, was informed by the computer system that there had been a US nuclear launch. Under the protocols, he was supposed to then inform his superiors, who might’ve ordered immediate retaliation. But it turned out the system had misinterpreted sunlight reflecting off clouds as a missile launch. So it’s very good that Petrov made the decision to wait a few minutes before he called his superiors.

I’m listening through to those examples, and the thing I might take away if I’m thinking about it really simplistically is that human beings pull us back from the brink when technology screws up.

It’s true. And there have been some really interesting recent tests in which AI models were given military crisis scenarios, and they actually tend to be more hawkish than human decision-makers. We don’t know exactly why that is. If we look at why we haven’t fought a nuclear war — why, 80 years after Hiroshima, nobody’s dropped another atomic bomb, why there’s never been a nuclear exchange on the battlefield — I think part of it is just how terrifying it is. Humans understand the destructive potential of these weapons and what escalation can lead to, and that certain steps may have unintended consequences. Fear is a big part of it.

From my perspective, I think we want to make sure that there’s fear built into the system. That entities that are capable of being absolutely freaked out by the destructive potential of nuclear weapons are the ones who are making the key decisions on whether to use them.

Watching A House of Dynamite, you can vividly think that perhaps we should get AI out of this entirely. But it sounds like what you’re saying is: AI is part of the nuclear infrastructure for us and for other nations, and it is likely to stay that way.

One advocate for more automation told me, “If you don’t think humans can build a trustworthy AI, then humans have no business with nuclear weapons.” But the thing is, I think that’s a statement people who want to eliminate nuclear weapons entirely would also agree with.
I may have gotten into this worried that AI was going to take over nuclear weapons, but I realized that right now I’m worried enough about what people are going to do with nuclear weapons. It’s not that AI is going to kill people with nuclear weapons. It’s that AI might make it more likely that people kill each other with nuclear weapons. To a degree, the AI is the least of our worries. I think the movie shows well just how absurd the scenario in which we’d have to decide whether or not to use them really is.
