'Godfather of Deep Learning' quits Google and warns of AI dangers: 'I don’t think they should scale this up more until they have understood whether they can control it'

Geoffrey Hinton (Image credit: University of Toronto)

Geoffrey Hinton, known colloquially as the "Godfather of Deep Learning," spent the past decade working on artificial intelligence development at Google. But in an interview with The New York Times, Hinton announced that he has resigned from his position, and said he's worried about the rate of AI development and its potential for harm.

Hinton is one of the foremost researchers in the field of AI development. The Royal Society, to which he was elected as a Fellow in 1998, describes him as "distinguished for his work on artificial neural nets, especially how they can be designed to learn without the aid of a human teacher," and says that his work "may well be the start of autonomous intelligent brain-like machines."

In 2012, he and his students Alex Krizhevsky and Ilya Sutskever developed a system called AlexNet, a "convolutional neural network" able to recognize and identify objects in images with far greater accuracy than any preceding system. Shortly after using AlexNet to win the 2012 ImageNet challenge, they launched a startup called DNNresearch, which Google quickly snapped up for $44 million.
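For the curious, "convolutional" refers to the network's core operation: sliding a small filter across an image and measuring how strongly each patch matches it. The toy Python sketch below is a minimal illustration of that idea only (it is not AlexNet's actual code, and the hand-written edge filter is purely hypothetical); networks like AlexNet learn thousands of such filters from data and stack them in layers.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide `kernel` over `image`, taking a dot product at each position.
    This is the basic building block of a convolutional neural network."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + kh, x:x + kw]
            out[y, x] = np.sum(patch * kernel)
    return out

# A hypothetical vertical-edge filter, hand-written for illustration.
# A real CNN learns its filters during training instead.
edge_filter = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]])

image = np.zeros((8, 8))
image[:, 4:] = 1.0  # dark left half, bright right half

# Output has large-magnitude values where the filter overlaps the edge.
print(convolve2d(image, edge_filter))
```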

Hinton continued his AI work on a part-time basis at Google (he's also a professor at the University of Toronto) and continued to lead advancements in the field: In 2018, for instance, he was a co-winner of the Turing Award for "major breakthroughs in artificial intelligence."

"He was one of the researchers who introduced the back-propagation algorithm and the first to use backpropagation for learning word embeddings," his presumably soon-to-be-deleted Google employee page says. "His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning, products of experts and deep belief nets. His research group in Toronto made major breakthroughs in deep learning that have revolutionized speech recognition and object classification."


More recently, though, he's apparently had a dramatic change of heart about the nature of his work. Part of Hinton's new concern arises from the "scary" rate at which AI development is moving forward. "The idea that this stuff could actually get smarter than people—a few people believed that," Hinton said. "But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

That's happening at least in part as a result of competing corporate interests, as Microsoft and Google race to develop more advanced AI systems. It's unclear what can be done about it: Hinton said he believes that race to the top can only be managed through some form of global regulation, but that may be impossible because there's no way to know what companies are working on behind closed doors. Thus, he thinks it falls to the scientific community to take action.

"I don’t think they should scale this up more until they have understood whether they can control it," he said.

But even if scientists elect to take a slower and more deliberate approach to AI (which I think is unlikely), the inevitable outcome of continued development obviously worries Hinton too: "It is hard to see how you can prevent the bad actors from using it for bad things," he said.

Hinton's latest comments stand in interesting contrast to a 2016 interview with Maclean's, in which he acknowledged the need for caution but argued that it shouldn't be allowed to hinder the future development of AI.

"It’s a bit like… as soon as you have good mechanical technology, you can make things like backhoes that can dig holes in the road. But of course a backhoe can knock your head off," Hinton said. "But you don’t want to not develop a backhoe because it can knock your head off, that would be regarded as silly.

"Any new technology, if it’s used by evil people, bad things can happen. But that’s more a question of the politics of the technology. I think we should think of AI as the intellectual equivalent of a backhoe. It will be much better than us at a lot of things. And it can be incredibly good—backhoes can save us a lot of digging. But of course, you can misuse it."

People should be thinking about the impact that AI will have on humanity, he said, but added, "the main thing shouldn’t be, how do we cripple this technology so it can’t be harmful, it should be, how do we improve our political system so people can’t use it for bad purposes?"

Hinton made similar statements in a 2016 interview with TVO, in which he acknowledged the potential for problems but said he expected them to be much further down the road than they're actually proving to be.


Interestingly, Hinton was not among the signatories of the recent open letters calling for a six-month "pause" on the development of new AI systems. According to the Times, he didn't want to publicly criticize Google or other companies until after he had resigned. However, Hinton clarified on Twitter that he did not leave Google so he could speak out about the company, but so that he could "talk about the dangers of AI without considering how this impacts Google."

"Google has acted very responsibly," he added.

Be that as it may, it's a very big deal that one of the foremost minds in AI development is now warning that it could all be very bad for us one day. Hinton's new outlook has obvious parallels to Oppenheimer's regret about his role in developing nuclear weapons. Of course, Oppenheimer's second thoughts came after the development and use of the atomic bomb, when it was easy to see just how dramatically the world had changed. It remains to be seen whether Hinton's regrets also come after the horse has bolted, or if there's still time (and sufficient regulatory capability in global governments) to avoid the worst.



Andy Chalk
