'I don't know what it means to have a Manhattan Project for AI': Nuclear war experts remind us of the frightening risks of our artificial intelligence controlling our nukes

Civilians looking out at an explosion in the distance
(Image credit: Bethesda)

I'm not old enough to have lived through the Cold War and its close nuclear shaves that apparently brought us to the brink of armageddon. But even millennials like myself, and younger generations, have grown up under the shadow of that particular mushroom-shaped threat. And it might just be me, but I don't find that fear soothed by the relentless advance of AI.

That's especially the case when I reflect on the possibility of AI being baked into parts of nuclear launch systems, which, as Wired reports, nuclear war experts think might be inevitable. Bob Latiff, a former US Air Force major general and member of the Science and Security Board of the Bulletin of the Atomic Scientists, for instance, thinks that AI is "like electricity" and is "going to find its way into everything."

The various experts—scientists, military personnel, and so on—spoke to Nobel laureates last month at the University of Chicago. And while it seems there might have been an air of determinism about the whole 'AI coming to nukes' thing—one that's supported by the military seemingly leaning into AI adoption—that doesn't mean everyone was keen on this future.

In fact, judging from what Wired relays, the experts were quick to point out the risks that could come with this new Manhattan Project.

The first thing to understand is that launching a nuke doesn't come down to the final key-turn alone. That key-turn is the result of what Wired describes as "a hundred little decisions, all of them made by humans." And it's that last part that's key when considering AI: which of these little decisions could, and should, AI be allowed to exercise agency over?

Thankfully, the bigwigs seem to agree that humans need agency over actual nuclear weapon decisions. But even if AI has no direct agency over decisions in the process, are there problems with relying on its information or suggestions?

Jon Wolfsthal, director of global risk at the Federation of American Scientists, explains his concerns: "What I worry about is that somebody will say we need to automate this system and parts of it, and that will create vulnerabilities that an adversary can exploit, or that it will produce data or recommendations that people aren't equipped to understand, and that will lead to bad decisions."

I've already spoken about what I see as utopian AI fanaticism among the new tech elites, and we're certainly seeing the US lean heavily into the AI arms race, hence the US energy secretary calling it the second Manhattan Project, a framing that is also the Energy Department's official stance. So it's not exactly a far-fetched idea that AI could start to be used to automate parts of the system: to, for instance, produce data or recommendations from the black box of an artificial intelligence.

This problem is also surely exacerbated by a general lack of understanding of AI, and perhaps a misplaced faith in it. Wolfsthal agrees on the first point: "The conversation about AI and nukes is hampered by a couple of major problems. The first is that nobody really knows what AI is."

If we misunderstand AI as something that is inherently truth-aiming then we are liable to be unthinkingly misguided by its data or recommendations. AI isn't inherently truth-aiming, humans are. We can try to guide AI in the direction of what we consider to be truthful, but that's coming from us, not the AI.

If we start to feed AI into parts of processes that quite literally hold the keys to the fate of humanity, these are the kinds of things that we need to be remembering. It's good news, at least, that these conversations are taking place between nuclear war and nuclear proliferation experts and those who actually have a hand in how we tackle the problem in the future.

Jacob Fox
Hardware Writer

Jacob got his hands on a gaming PC for the first time when he was about 12 years old. He swiftly realised the local PC repair store had ripped him off with his build and vowed never to let another soul build his rig again. With this vow, Jacob the hardware junkie was born. Since then, Jacob's led a double-life as part-hardware geek, part-philosophy nerd, first working as a Hardware Writer for PCGamesN in 2020, then working towards a PhD in Philosophy for a few years while freelancing on the side for sites such as TechRadar, Pocket-lint, and yours truly, PC Gamer. Eventually, he gave up the ruthless mercenary life to join the world's #1 PC Gaming site full-time. It's definitely not an ego thing, he assures us.
