Tech luminaries including Musk and Wozniak beg AI pioneers to hit the brakes

OpenAI logo displayed on a phone screen and ChatGPT website displayed on a laptop screen are seen in this illustration photo taken in Krakow, Poland on December 5, 2022.
(Image credit: Jakub Porzycki/NurPhoto via Getty Images)

Elon Musk, Apple co-founder Steve Wozniak and the CEO of Stability AI are among the leading figures from across the tech industry who have signed an open letter calling for a six-month "pause" on the development of artificial intelligence systems more powerful than GPT-4, OpenAI's latest model.

The letter was published by the Future of Life Institute, a non-profit organisation, and has more than 1,100 signatories from across the worlds of academia and technology at the time of writing.

"AI systems with human-competitive intelligence can pose profound risks to society and humanity," the letter says. “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control.”

Consequently, the letter argues that the industry should pause for thought before developing anything more powerful than GPT-4. If that doesn't happen voluntarily, governments should step in.

"We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."

Leading signatories include Emad Mostaque, founder and CEO of Stability AI, the outfit behind the Stable Diffusion text-to-image generation model; Pinterest co-founder Evan Sharp; Chris Larsen, a co-founder of cryptocurrency company Ripple; deep learning pioneer Yoshua Bengio; and Connor Leahy, CEO of AI lab Conjecture.

Fancy an app entirely created by an AI? They already exist. (Image credit: Apple)

Of course, cynics might suggest that many of the signatories could just want some time to catch up with the competition. But plenty more have nothing obvious to immediately gain.

Moreover, it's undeniable that the last few months have seen explosive developments in large GPT-style AI models. Taking OpenAI's GPT models alone, the GPT-3.5 model that originally powered ChatGPT was limited to text input and output. GPT-4, however, is multimodal, accepting image inputs as well as text.

New plugins quickly followed, giving the models "eyes" and "ears" and allowing them to send emails, execute code and do things in the real world, such as booking flights, via internet access.
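To get a feel for what a plugin is doing under the hood, here's a minimal, purely illustrative sketch in Python. It uses a stand-in model function rather than any real LLM or OpenAI's actual plugin API: the model's reply names a tool, and a wrapper looks that tool up and executes it.

import json

def fake_model(prompt: str) -> str:
    # Stand-in for a real LLM call; a real model would generate this reply.
    return json.dumps({"tool": "send_email",
                       "args": {"to": "a@example.com", "body": "Hello"}})

def send_email(to: str, body: str) -> str:
    # A "tool" the wrapper can execute on the model's behalf.
    print(f"(pretend) emailing {to}: {body}")
    return "sent"

TOOLS = {"send_email": send_email}

def run(prompt: str) -> str:
    request = json.loads(fake_model(prompt))  # model picks a tool and arguments
    tool = TOOLS[request["tool"]]             # wrapper looks the tool up...
    return tool(**request["args"])            # ...and acts in the real world

run("Email a@example.com to say hello")

The point is that once a loop like this exists, the model's output is no longer just text: it's instructions that software will actually carry out.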

GPT-4's reasoning abilities are also a major step up from ChatGPT and GPT-3. For instance, while ChatGPT scores in the bottom 10% on a simulated US bar exam, GPT-4 scores in the top 10%.

GPT-4 has now been used to create Google Chrome extensions and iPhone apps, the latter written from scratch and now available on the official App Store. GPT-4 has also successfully coded a basic 3D game engine akin to the original Doom. People have even tasked GPT models with creating investment strategies and then implemented them.

It's not hard, therefore, to see how these models could very rapidly have a major impact on the real world, even before you raise the question of what happens if they become sentient. On that note, in a paper accompanying GPT-4, its creators raise concerns that the model itself could develop and pursue undesirable or hazardous ends.

"Some that are particularly concerning are the ability to create and act on long-term plans,[62] to accrue power and resources (“powerseeking”), and to exhibit behavior that is increasingly “agentic.”[64] Agentic in this context does not intend to humanize language models or refer to sentience but rather refers to systems characterized by ability to, e.g., accomplish goals which may not have been concretely specified and which have not appeared in training; focus on achieving specific, quantifiable objectives; and do long-term planning. Some evidence already exists of such emergent behavior in models," the paper says.

Meanwhile, researchers at Microsoft say that the latest GPT-4 model is indeed showing "sparks" of artificial general intelligence. Anywho, the main issue here is surely that these models are being unleashed at scale on the public in everything from Bing search to email and office tools.

It might all be fine. But, equally, the scope for unintended consequences currently looks to be almost infinite. And that's a little scary, with the usual provisos that we welcome whatever new overlords may emerge. You know, just in case.

Jeremy Laird
Hardware writer

Jeremy has been writing about technology and PCs since the 90nm Netburst era (Google it!) and enjoys nothing more than a serious dissertation on the finer points of monitor input lag and overshoot followed by a forensic examination of advanced lithography. Or maybe he just likes machines that go “ping!” He also has a thing for tennis and cars.