Google is using AI to design AI processors much faster than humans can
Chips making chips.
To one extent or another, artificial intelligence is practically everywhere these days, from games to image upscaling to smartphone "personal assistants." More than ever, researchers are pouring a ton of time, money, and effort into AI designs. At Google, AI algorithms are even being used to design AI chips.
Google isn't handing the AI a complete silicon design; it's tackling a subset of chip design known as placement optimization, a time-consuming task for humans. As explained by IEEE Spectrum (via LinusTechTips), this involves placing blocks of logic and memory (or clusters of those blocks) in strategic areas to make the most of the available real estate, both for performance and power efficiency.
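To make the idea a little more concrete, here's a toy sketch in Python, purely illustrative and not Google's actual tooling: assign blocks to spots on a grid, then score the arrangement, in this case by total wire length between connected blocks. The block names, netlist, and 2x2 grid are invented for the example.

from itertools import permutations

# Hypothetical netlist: pairs of blocks that need to be wired together.
NETS = [("compute_core", "l2_cache"), ("l2_cache", "mem_ctrl"), ("compute_core", "io_block")]
BLOCKS = ["compute_core", "l2_cache", "mem_ctrl", "io_block"]
GRID = [(0, 0), (0, 1), (1, 0), (1, 1)]  # 2x2 grid of candidate slots

def wire_length(placement):
    # Sum of Manhattan distances between connected blocks: a rough cost proxy.
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in NETS)

# Brute force is fine for four blocks; real chips have thousands of blocks and
# far more constraints than wire length, which is why placement takes humans weeks.
best = min((dict(zip(BLOCKS, slots)) for slots in permutations(GRID)), key=wire_length)
print(best, wire_length(best))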
It might take a team of engineers several weeks to map out the ideal placement because it's a complex task with a ton of variables. In stark contrast, Google's neural network can produce a better design for a Tensor Processing Unit in less than 24 hours. TPUs are similar in concept to the Tensor cores Nvidia uses in its Turing-based GeForce RTX graphics cards, just with different goals in mind.
That's interesting in and of itself, but equally so is the type of AI Google is using. Rather than leverage a deep learning model, which requires training AI on a large set of data, Google is using a "reinforcement learning" system. The short explanation is that RL models learn by doing.
There is a reward system involved that steers RL models in the right direction. In this case, the reward is a combination of power reduction, performance improvements, and area reduction. I'm simplifying a bit, but basically, the more designs Google's AI produces, the better it gets at the task at hand (making AI chips).
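For a rough sense of what that reward might look like, here's a hypothetical Python sketch; the baseline figures and weights are invented for illustration and aren't taken from Google's paper.

def placement_reward(power_watts, latency_ns, area_mm2,
                     baseline=(10.0, 5.0, 100.0), weights=(0.4, 0.4, 0.2)):
    # Higher is better: score a candidate placement by how much it improves
    # power, performance (latency), and area relative to a baseline design.
    base_power, base_latency, base_area = baseline
    w_power, w_perf, w_area = weights
    return (w_power * (base_power - power_watts) / base_power +
            w_perf * (base_latency - latency_ns) / base_latency +
            w_area * (base_area - area_mm2) / base_area)

# The agent tries a placement, collects its score, and nudges itself toward
# layouts that score higher, getting better with every design it evaluates.
print(placement_reward(power_watts=9.2, latency_ns=4.6, area_mm2=97.0))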
"We believe that it is AI itself that will provide the means to shorten the chip design cycle, creating a symbiotic relationship between hardware and AI, with each fueling advances in the other," Google's researchers explain. If this works out for Google, it seems inevitable AMD, Intel and Nvidia will eventually try the same approach, too.
You can check out the technical details in a paper posted to arXiv.
Thanks, LinusTechTips.
Paul has been playing PC games and raking his knuckles on computer hardware since the Commodore 64. He does not have any tattoos, but thinks it would be cool to get one that reads LOAD"*",8,1. In his off time, he rides motorcycles and wrestles alligators (only one of those is true).


