UK and EU governments are throwing themselves on the proverbial tram track that is AI ethical standards. The European Commission (EC) has already drafted some laws to regulate the use of AI, but reports suggest it'll take up to a year to actually get them in place.
Right now, we're caught in the crossfire in the AI badlands. The law is seemingly being pushed aside while new AI applications are being rolled out all over, wholly unregulated.
According to Reuters (via AI News), two lawmakers involved with the EU's proceedings said the debate is tied up on whether facial recognition should be banned, and over who has the right to preside over the rules, and keep the AI in check.
It's a similar situation in the US, where there is still no federal regulation of artificial intelligence, though some US AI regulation is reportedly "on the horizon." It will apparently take a different form, however, with the detailed framework the EC has proposed exchanged for an agency-by-agency approach.
The previous draft from the European Commission established some classifications for AI, depending on the level of risk that each system might pose to us as a species. These range from 'limited risk systems' such as chatbots and spam filters, right up to those of 'unacceptable risk'—i.e. anything exploitative, manipulative, or that might "conduct real-time biometric authentication in public spaces for law enforcement."
That all sounds very Orwellian, but when we've got DeepMind training AIs to control nuclear fusion, you'd think facial recognition would be the least of our worries.
'High risk' AI systems will be required to undergo heavy vetting, and be kept on some tight reins in order to operate within the law. Regulations could include anything from human oversight, to mandatory risk management systems, or government registration. Any system deemed high risk will likely require some intense record keeping and logging, in case anything goes awry, and potentially full disclosure of such records to users for transparency's sake.
It seems at least that video game AI is poised for inclusion in the limited risk category, but who knows whether that will get bumped up a rung once everyone bails on reality, and makes the exodus into the metaverse.
A decision has already been made that any AI sitting at the top end of the risk spectrum will be subject to a blanket ban from deployment. The difficulty lies in classifying AI, however, and the bickering looks set to continue for some time.
"Facial recognition is going to be the biggest ideological discussion between the right and left," Dragos Tudorache of the European Parliament divulged in an interview with Reuters. "I don't believe in an outright ban. For me, the solution is to put the right rules in place."
The decision may be taking some time, but UK Minister Chris Philp, from the Department for Digital, Culture, Media & Sport (DCMS), is adamant: "We're laying the foundations for the next ten years' growth with a strategy to help us seize the potential of artificial intelligence and play a leading role in shaping the way the world governs it."
The UK government is also creating an AI Standards Hub, working with the Alan Turing Institute, that will oversee its engagement in the world of AI ethics and safety guidance.
These are decisions that shouldn't be taken lightly, of course, but a whole year to make a few rules regarding AI? It's maybe a little worrying that it's still a bit of a free-for-all out there. I mean, the great minds of sci-fi have been on this for years already, guys, catch up. I'm sure Orwell, Philip K. Dick and other future-speculators are all rolling in their graves right about now.
Still, I am at least glad the big-wigs are seeking input from top scientists, rather than making it up on the spot for a quicker decision. I'd rather regulation be done right than be reactionary, but the lack of regulation in the short term does rely on trustworthy AI systems staying that way.