EU outlines rules aiming to make it clear when content has been generated by an AI

OpenAI logo displayed on a phone screen and ChatGPT website displayed on a laptop screen are seen in this illustration photo taken in Krakow, Poland on December 5, 2022.
(Image credit: Jakub Porzycki/NurPhoto via Getty Images)

The European Parliament today agreed on how its proposed AI rules will look, ahead of them being formally negotiated with EU member states. The new rules aim to make it easier to spot when content has been AI-generated, including deep fake images, and would completely outlaw AI's use in biometric surveillance, emotion recognition, and predictive policing.

The new rules would mean AI tools such as OpenAI's ChatGPT would have to make it clear that content is AI-generated, and their makers would bear some responsibility for ensuring users know when an image is a deep fake or the real deal. That seems a mighty task, as once an image is generated it's tough to limit how a user shares it, but it may be something these AI companies have to figure out in the near future.
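The rules don't prescribe any particular labelling mechanism, but one illustrative approach is embedding a disclosure tag in the generated file's own metadata. Below is a minimal sketch in Python using Pillow's PNG text chunks; the field names, label text, and the generator call are assumptions for illustration, not anything the EU proposal specifies, and metadata like this can of course be stripped when an image is re-shared, which speaks to the difficulty mentioned above.

```python
# Sketch: write an "AI-generated" disclosure into a PNG's metadata.
# Field names ("Disclosure", "Generator") are hypothetical, not mandated anywhere.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_disclosure(image: Image.Image, path: str, generator: str) -> None:
    """Save a PNG with a disclosure label embedded in its text metadata."""
    meta = PngInfo()
    meta.add_text("Disclosure", "This image was generated by an AI system.")
    meta.add_text("Generator", generator)
    image.save(path, pnginfo=meta)

# Usage (hypothetical generator call):
# img = some_model.generate(prompt)
# save_with_ai_disclosure(img, "output.png", generator="ExampleDiffusion v1")
```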

If these new rules were to pass into law as is, companies behind AI models would need to release "detailed summaries" of the copyrighted data used in training to the public. For OpenAI specifically, that would mean unveiling the training data for the massive GPT-3 and GPT-4 models in use today, which is currently not available to peruse. Some big datasets used for training AI models already make this information available, such as LAION-5B.

There would also be AI uses that are entirely prohibited, specifically those that could encroach on EU citizens' privacy rights:

  • "Real-time" and "post" remote biometric identification systems in publicly accessible spaces.
  • Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation).
  • Predictive policing systems (based on profiling, location or past criminal behaviour).
  • Emotion recognition systems in law enforcement, border management, the workplace, and educational institutions.
  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (violating human rights and right to privacy).

These rules are yet to be enshrined in law. Ahead of that, member states get to weigh in with propositions of their own, and that process begins later today. Expect the finalised rules for AI to look similar to these proposed ones, however. The EU seems dead set on making sure it has the jump on AI and its potential uses, in as much as any government can, anyway.

Jacob Ridley
Senior Hardware Editor

Jacob earned his first byline writing for his own tech blog. From there, he graduated to professionally breaking things as hardware writer at PCGamesN, and would go on to run the team as hardware editor. Since then he's joined PC Gamer's top staff as senior hardware editor, where he spends his days reporting on the latest developments in the technology and gaming industries and testing the newest PC components.