AI experts are calling for safety calculations akin to Compton's before the first A-bomb test, ahead of releasing Artificial Super Intelligences upon humanity

An AI face looks down on a human.
(Image credit: Colin Anderson via Getty Images)

AI is an initialism I'm hearing multiple times a day lately, and it's usually being applied to the right thing maybe 30% of the time. LLMs, such as ChatGPT and DeepSeek, are constantly in the news while we talk about putting AI in everything from our gaming chips to our schools. It's easy to dismiss all this as a pop-culture phase, much like the uranium fever that gripped the globe back when nuclear anxiety was at its height.

The comparison between launching an A-bomb and an AI algorithm might seem hyperbolic, but the Guardian has reported that AI experts are calling for safety calculations akin to those carried out before the Trinity test, the first detonation of a nuclear weapon.

Max Tegmark, a professor of physics and AI researcher at MIT, has published a paper with three of his students recommending a similar approach. In it, they call for a mandatory calculation of whether any sufficiently advanced AI might slip out of human control. The proposal is being compared to the calculation Arthur Compton carried out to ascertain the likelihood of an atomic bomb igniting the atmosphere before the Trinity test was allowed to take place.

Based on those calculations, Compton gave Trinity the go-ahead after putting the likelihood of such an ignition at slightly less than one in three million. Tegmark, carrying out similar calculations for AI, has found it 90% likely that a highly advanced AI could pose its own threat to humanity, and not just in the form of Windows bugs. This currently theoretical level of AI has been dubbed an Artificial Super Intelligence, or ASI.

The calculations have left Tegmark convinced that safety measures need to be put in place, and that companies have a responsibility to check for these potential threats. He also believes a standardised approach, agreed upon and calculated by multiple companies, is needed to create the political pressure for companies to comply.

"The companies building super-intelligence need to also calculate the Compton constant, the probability that we will lose control over it," he said. "It’s not enough to say ‘we feel good about it’. They have to calculate the percentage."

This isn't Tegmark's first push for more regulation and forethought in the making of new AIs. He's also a co-founder of the Future of Life Institute, a non-profit dedicated to the safe development of AI. The institute published an open letter in 2023 calling for a pause on developing powerful AIs, which gained the attention and signatures of folks like Elon Musk and Steve Wozniak.

Tegmark also worked with world-leading computer scientist Yoshua Bengio, as well as researchers at OpenAI, Google, and DeepMind, on The Singapore Consensus on Global AI Safety Research Priorities report. It seems if we ever do release an ASI onto the world, we'll at least know the exact percentage chance it has of ending us all.


Hope Corrigan
Hardware Writer

