OpenAI is being sued by a Georgia radio host because ChatGPT accused him of a crime he did not commit

(Image: HAL 9000's eye from 2001: A Space Odyssey. Image credit: Stanley Kubrick Productions)

In April, Australian politician Brian Hood threatened to sue ChatGPT maker OpenAI after the chatbot incorrectly identified him as a criminal. Now the company actually is being sued, this time in the US, for similar reasons: ChatGPT claimed that radio host Mark Walters had been accused of embezzling more than $5 million from a non-profit called the Second Amendment Foundation, an accusation that has never actually been made.

According to the lawsuit (via The Verge), a journalist named Fred Riehl asked ChatGPT about a separate lawsuit he was reporting on, The Second Amendment Foundation v. Robert Ferguson. When asked to provide a summary of the complaint, ChatGPT said it had been filed against Walters after he allegedly "misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports and disclosures to the SAF's leadership."

But none of that is true: There is no such accusation, and Walters isn't named in the lawsuit at all. Yet when Riehl requested the specific portion of the lawsuit relating to Walters, ChatGPT produced one; he then asked for the entire complaint, and again, the chatbot delivered. The problem, according to Walters' suit, is that all of it was "a complete fabrication" bearing no resemblance to the actual Second Amendment Foundation lawsuit: even the case number is wrong.

The good news for Walters is that none of what ChatGPT provided to Riehl was published. It's not clear whether this was a test of some sort, or if Riehl simply sensed that something was fishy, but he contacted one of the plaintiffs at the Second Amendment Foundation, who confirmed that Walters had nothing to do with any of it. But even though Riehl didn't publish it (and it's not clear how Walters subsequently found out about it), Walters' lawsuit states that by providing the false allegations to him, "OAI published libelous matter regarding Walters."

As The Verge explains, Section 230 of the Communications Decency Act generally protects internet companies from legal liability for third-party content hosted on their platforms: simplistically, you can't sue Reddit over a message somebody else posted there. But it's not clear how that will apply to AI systems, if at all: They draw on external sources for information, but they also use their own systems to generate "new" information that's ultimately provided to users. That could put ChatGPT and OpenAI outside Section 230's protections.

But it may not matter. UCLA law professor Eugene Volokh wrote on Reason.com that while he believes libel cases related to AI are "in principle legally viable," this one in particular may not be, because Walters apparently did not give OpenAI a chance to correct the record and stop making false statements about him, and because no actual damages were involved. So while the odds are good that someday, some AI company will take a beating in the courtroom because its chatbot spun some nonsense tale that landed a real person in hot water, this case may not be it. I've reached out to OpenAI for comment and will update if I receive a reply.

Andy Chalk

Andy has been gaming on PCs from the very beginning, starting as a youngster with text adventures and primitive action games on a cassette-based TRS-80. From there he graduated to the glory days of Sierra Online adventures and MicroProse sims, ran a local BBS, learned how to build PCs, and developed a longstanding love of RPGs, immersive sims, and shooters. He began writing videogame news in 2007 for The Escapist and somehow managed to avoid getting fired until 2014, when he joined the storied ranks of PC Gamer. He covers all aspects of the industry, from new game announcements and patch notes to legal disputes, Twitch beefs, esports, and Henry Cavill. Lots of Henry Cavill.