UK government has been using AI for all sorts and it's going as well as you might expect

(Image credit: Getty Images)

The UK government is using deep learning algorithms, under the catch-all umbrella of AI, to help its various departments make decisions on welfare benefit claims, detect cases of fraud, and even scan passports. That's probably no surprise whatsoever, but as one investigation suggests, it's opening a massive can of worms for all concerned.

If you're wondering what kind of AI is being talked about here, then think about upscaling. The systems employed by the government aren't too dissimilar from those developed by Nvidia for its DLSS Super Resolution technology.

The model is trained by feeding it millions of very high resolution frames from hundreds of games, so when the algorithm is then given a low resolution image, it can work out how the frame is most likely to appear once it's been upscaled.

DLSS upscaling uses a fairly standard routine to make the jump from 1080p to 4K, for example, then runs the AI algorithm to correct any errors in the image. But like all such systems, the quality of the end result depends massively on what you feed into the algorithm and on the data the model was trained with.
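
To see how little magic is involved, here's a minimal sketch of that train-then-upscale idea, written in PyTorch. It's a toy, not Nvidia's actual DLSS pipeline: the 'frames' are random synthetic tensors and the network is tiny, but the principle is the same.

    # A toy sketch of learned upscaling (not Nvidia's actual DLSS code):
    # train a small network to turn low-res frames back into high-res ones.
    import torch
    import torch.nn as nn

    # Stand-in data: random 'high-res' frames and their downscaled versions.
    hi_res = torch.rand(64, 3, 32, 32)            # ground-truth frames
    lo_res = nn.functional.avg_pool2d(hi_res, 2)  # 16x16 'low-res' inputs

    # A naive upsample first, then convolutions learn to correct its errors.
    model = nn.Sequential(
        nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for step in range(200):
        loss = nn.functional.mse_loss(model(lo_res), hi_res)  # penalise errors
        opt.zero_grad()
        loss.backward()
        opt.step()

Swap those random tensors for real game frames and you have the gist of it: the network's 'corrections' are only ever as good as the examples it was trained on.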

An investigation by the Guardian into the UK government's use of AI highlights what happens when there are problems with both of those aspects. For example, the publication reports that the Home Office was using AI to read passports at airports, to help flag potential fake marriages for further investigation.

The Guardian says an internal Home Office evaluation shows the algorithm is highlighting a disproportionate number of people from Albania, Greece, Romania, and Bulgaria. If the model was trained on data that already over-emphasises particular traits or groups, then the AI will be just as biased in its calculations.
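
That feedback loop is easy to demonstrate. The sketch below is purely illustrative, using made-up data and a basic scikit-learn classifier rather than anything the Home Office actually runs: because one group is over-represented in the historical 'fraud' labels, the trained model flags that group more often even when the underlying risk is identical.

    # A toy illustration of biased training data (not the Home Office system).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)  # 0 = group A, 1 = group B
    risk = rng.normal(0, 1, n)     # a genuinely informative feature

    # Biased historical labels: group B was investigated far more often,
    # so it's over-represented among 'fraud' cases at the same risk level.
    labels = (risk + 1.5 * group + rng.normal(0, 1, n)) > 1.5

    model = LogisticRegression().fit(np.column_stack([risk, group]), labels)

    # Two applicants with identical risk, differing only in group membership.
    test = np.column_stack([np.zeros(2), [0, 1]])
    print(model.predict_proba(test)[:, 1])  # group B gets a far higher score

The model isn't malicious, and nobody coded a rule that says 'flag group B'; it has simply learned to reproduce the skew baked into its training labels.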

(Image credit: Jonathan Raa/NurPhoto via Getty Images)

News reports of government organisations getting things seriously wrong because of an over-reliance on AI aren't rare. The hype surrounding the potential of artificial intelligence has led to the likes of ChatGPT being treated as one of the most important inventions of the moment, and yet it can easily produce some highly questionable and shocking results.

The UK government naturally defends the use of AI and, in the case of welfare benefit claims, says that the final decision is made by a person. But does that person base their decision on the algorithm's output, or do they go back and check everything again? If it's the latter, the use of AI has been a total waste of time and money.

But if it's the former, and the AI has been trained on information that's already biased, then that final decision made by a real living human being will be biased too. Even seemingly innocent use cases are affected by this, such as identifying which people are most at risk if a pandemic occurs: the wrong people could be selected, or those most in need skipped entirely.

Such is the potential for deep learning to be used in all things, for good and for bad, that no government is going to turn its back on it now. What's needed is greater transparency around the algorithms used, along with giving experts access to the code and datasets to ensure the systems are being used fairly and appropriately.

In the UK, such a move has already taken place, but when organisations are simply 'encouraged to complete an algorithmic transparency report for every algorithmic tool', there's not much incentive or legal pressure for any of them to actually do so.

This may change in time, but until then, I'd like to see a widespread training programme for all government employees who use AI in their roles. Not on how to use it, but on understanding its limitations, so that people are in a better position to question an algorithm's output.

We're all biased, one way or another, but we have to remember, so is AI.

Nick Evanson
Hardware Writer

Nick, gaming, and computers all first met in 1981, with the love affair starting on a Sinclair ZX81 in kit form and a book on ZX Basic. He ended up becoming a physics and IT teacher, but by the late 1990s decided it was time to cut his teeth writing for a long-defunct UK tech site. He went on to do the same at MadOnion, helping to write the help files for 3DMark and PCMark. After a short stint working at Beyond3D.com, Nick joined Futuremark (MadOnion rebranded) full-time, as editor-in-chief for its gaming and hardware section, YouGamers. After the site shut down, he became an engineering and computing lecturer for many years, but missed the writing bug. Cue four years at TechSpot.com and over 100 long articles on anything and everything. He freely admits to being far too obsessed with GPUs and open world grindy RPGs, but who isn't these days?