Those erroneous search results were just the AI doing its job, says Google—"Prior to these screenshots going viral, practically no one asked Google that question"

(Image credit: Busà Photography via Getty Images)

Google's AI Overview search feature has had, I think it's fair to say, quite a rocky start. From recommending users drink urine (light in color) to suggesting the addition of glue to pizza sauce, we've all had a laugh at the AI's expense. In between bouts of existential dread and genuine concern over the serious damage this sort of advice could wreak upon users looking for factual results, of course.

Now Google has responded, and it turns out that actually the whole thing was really a big success (via The Verge). Liz Reid, head of Google search, says that "user feedback shows that…people have higher satisfaction with their search results", and that really, these erroneous results are simply down to the poor AI responding to "nonsensical queries and satirical content".

You should all be very ashamed of yourselves for asking silly questions, I guess. Anyway, Reid gives the example of a much-mocked search query response, "How many rocks should I eat", which the AI responded to by referencing a satirical article from The Onion, advising that you should eat at least one small rock a day.

This, Reid explains, is what is referred to as a "data void" or "information gap", where Google can only pull from a limited amount of high-quality content about a specific topic. Given that the only information the AI could reference was a satirical article that was also republished on a geological software provider's website, the AI Overview "faithfully linked to one of the only websites that tackled the question".

Since the launch, Google has apparently been busy working on updates that should prevent these sorts of results from appearing in future. These include better detection mechanisms that shouldn't show an AI overview for "nonsensical queries", limits on the usage of user-generated content in responses, and "additional triggering refinements to enhance our quality protections." 

There's also an interesting line regarding news coverage: "We aim to not show AI Overviews for hard news topics, where freshness and factuality is important."

Which, to me, reads as "Our AI is really not very good at getting the facts right, so we're doing our best to keep it away from important topics where it might get us in trouble".

In which case, it really does call into question whether the AI Overview feature should be let loose on, well, any subject when it comes to providing factual information. Still, at least it's relatively easy to turn off, although why we have to manually disable a default function that is still—by the company's own admission—very much a work in progress and capable of delivering wildly inaccurate results is beyond me.

The post ends with the reassurance that "we’ll keep improving when and how we show AI Overviews and strengthening our protections, including for edge cases, and we’re very grateful for the ongoing feedback."

So well done for your beta testing participation, Internet. Google is learning from you, it seems, and I personally hope that none of you ate rocks, made pizza with glue, or drank your own urine in the process.



Andy Edser
Hardware Writer

Andy built his first gaming PC at the tender age of 12, when IDE cables were a thing and high resolution wasn't. After spending over 15 years in the production industry overseeing a variety of live and recorded projects, he started writing his own PC hardware blog for a year in the hope that people might send him things. Sometimes they did.

Now working as a hardware writer for PC Gamer, Andy can be found quietly muttering to himself and drawing diagrams with his hands in thin air. It's best to leave him to it.