Everyone knows the AI business model is steal first, ask permission later, and Google's CEO is clinging to opt-outs as the justification: 'We do give people those rights'
"There was clearly a lot of excess investment, but none of us would question whether the internet was profound."
The head of Google parent company Alphabet, Sundar Pichai, has given a new interview to the BBC in which he says fears of the AI bubble bursting may well be justified. "I think no company is going to be immune, including us," said Pichai.
We'll return to the bubble, but the section of the interview that really jumped out at me related to current AI use and, in a wider sense, the business model behind it. AI companies have until now relied on a fair use defense for their unprecedented use of copyrighted content to train their models, acquired via the process known as "scraping."
The BBC's Faisal Islam asks Pichai about this model, mentioning the multiple court cases that have sprung up in different jurisdictions, and drives towards a literal million-dollar question: will these tech companies at some point have to retroactively pay for this use of copyrighted material?
"First of all, to step back, it is so important as we go through this that we both help drive creativity and innovation, but we have to do that in a framework which respects creatives' rights," says Pichai. "As well as a love for transformative use to deliver benefits to society. I think we're committed to copyright frameworks in all the countries we operate in."
Pichai then delivers the current go-to defense for AI companies, which is that many of them (not all) have incorporated opt-outs into their models.
"Today when we train we give people an option to opt-out of the training," says Pichai. "And we honour copyright in terms of how our outputs are generated."
That last line strikes me as especially weaselly. The question here is not whether these models output copyrighted work (even though they often do!). It's about whether they should be using copyrighted work in the first place.
Also: I can't figure out where this supposed opt-out is. I'm sure it does exist, but if you told me I could stop Google training its models on my YouTube videos by opting out, I couldn't tell you where that option actually lives. Anyway:
"We are in the process of working with the industry to create newer frameworks as we work through it, for example in YouTube we have always incorporated an approach to deliver value back to content rights holders," says Pichai. "We will apply those same principles through the AI moment, and I think it's super important to do that and so we're committed to getting it right."
I find this all super unconvincing, and Islam goes on to cite an example of an artist unhappy with the way their work is being used. The legendary musician Elton John recently said the AI firms were "committing theft, thievery on a high scale" and offered some excoriating criticism of the UK government's failure to protect creative industries.
Sir Elton also made the point that he has money and a platform to fight back, whereas younger creatives "haven't got the resources to fight big tech." His perspective, and it's one many would agree with, is that the onus should be on AI companies to ask first, rather than being allowed to rely on an opt-out: "It's criminal, in that I feel incredibly betrayed."
Pichai's response is deeply unconvincing: "Look, I mean today we allow anything we train, people can choose whether their content is opted into that training or not, so we do give people those rights."
The idea that big tech is "giving" people the rights to their own creative works strikes me as deeply wrong, and it gets to the heart of the issue: these companies are so large and so powerful that they can do what they like, while governments seem unable or unwilling to constrain them, and the fight is left to individuals like Elton John and institutions like the New York Times that can afford to pick it.
What could go wrong, eh? Well, this may lead some to actually hope that the bubble bursts, and a lot of folk think that's where we're heading. Alphabet's value has roughly doubled in the last year to an astonishing $3.5 trillion, which to my admittedly untrained eye still looks like a better bet than Nvidia's recent world-first $5 trillion valuation. Both of these companies are at least highly profitable: OpenAI, on the other hand, is currently valued at around $500 billion, with Reuters recently reporting plans for a $1 trillion IPO, and is burning through cash on the promise of jam tomorrow.
No wonder that some of the cooler heads in the room are looking at where AI is and wondering whether this is just another dotcom bubble, which saw the valuations of many early internet companies soar in the late 1990s before a collapse in early 2000 lost a lot of people a lot of money. Only last month Jamie Dimon, the boss of JP Morgan and widely regarded as one of the world's top bankers, said a tonne of money "would probably be lost" on AI.
"We can look back at the internet right now. There was clearly a lot of excess investment, but none of us would question whether the internet was profound," said Pichai. "I expect AI to be the same. So I think it's both rational and there are elements of irrationality through a moment like this."
So there you have it. These companies are going to take your content and use it as they will, while pointing to an opt-out that's essentially meaningless for the average person. And if you'll allow me to put my Nostradamus hat on, a lot of them are soon enough going to go bust, plunge the global economy into recession, and automate as many jobs as they can in the aftermath. The glorious AI future is here: why don't you scrape that up, Gemini.

Rich is a games journalist with 15 years' experience, beginning his career on Edge magazine before working for a wide range of outlets, including Ars Technica, Eurogamer, GamesRadar+, Gamespot, the Guardian, IGN, the New Statesman, Polygon, and Vice. He was the editor of Kotaku UK, the UK arm of Kotaku, for three years before joining PC Gamer. He is the author of A Brief History of Video Games, a full history of the medium, which the Midwest Book Review described as "[a] must-read for serious minded game historians and curious video game connoisseurs alike."