AI has been making huge leaps in scientific research, and companies like Nvidia and Meta are continuing to throw more resources at the technology. But AI learning can suffer a pretty huge setback when it adopts the prejudices of those who make it, like all those chatbots that wind up spewing hate speech thanks to their exposure to the criminally online.
According to Golem, OpenAI might have made some headway on that with its new successor to GPT-3, the autoregressive language model that uses deep learning in an effort to appear human in text. GPT-3 wrote this article, if you want an example of how that works.
But GPT-3 also has a tendency to parrot incorrect, biased, or outright toxic notions, thanks to the breadth of sources it was trained on. Those biases seep into its language, causing GPT-3 to make bigoted assumptions or implications in its writing. It's not too different from humans, in that all these reinforced ideas can easily look like truths, and there's no shortage of outdated notions to draw from. GPT-3 seems a bit like the weird uncle you don't talk to on Facebook.
The new InstructGPT is said to be an improvement, as its answers are "more truthful and less toxic". This has been achieved thanks to the work of researchers at OpenAI, whose alignment research helps the machine process instructions more accurately, despite the model being much smaller. InstructGPT uses 1.3 billion parameters, a fraction of the 175 billion used by the older GPT-3 model, but thanks to reinforcement learning from human feedback it has simply been better trained: researchers assess and rate the quality of InstructGPT's answers, and that feedback hopefully shapes it into a better bot overall.
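To give a rough feel for the idea, here's a toy sketch in Python of the "human feedback" loop: humans indicate which answers they prefer, a scoring function (the "reward model") learns to favour answers like those, and the system then picks whichever candidate scores highest. Everything here is illustrative; the word-overlap scorer and the example sentences are made up for this sketch and are nothing like OpenAI's actual method.

```python
# Toy illustration of the preference-scoring idea behind reinforcement
# learning from human feedback (RLHF). The "reward model" here is just a
# word-overlap score against answers humans preferred -- a stand-in, not
# anything resembling OpenAI's real reward model.

def toy_reward(response: str, preferred_examples: list[str]) -> float:
    """Score a response by average word overlap with human-preferred answers."""
    words = set(response.lower().split())
    scores = []
    for example in preferred_examples:
        example_words = set(example.lower().split())
        union = words | example_words
        scores.append(len(words & example_words) / len(union) if union else 0.0)
    return sum(scores) / len(scores)

def pick_best(candidates: list[str], preferred_examples: list[str]) -> str:
    """Stand-in for the training signal: favour the candidate the scorer likes."""
    return max(candidates, key=lambda c: toy_reward(c, preferred_examples))

# Hypothetical human-preferred answer and model candidates.
preferred = ["The sky appears blue because air scatters short wavelengths of light."]
candidates = [
    "The sky is blue because the ocean reflects onto it.",
    "Air scatters short wavelengths of light, so the sky appears blue.",
]
print(pick_best(candidates, preferred))
```

The real system trains a neural reward model on many such human rankings and then fine-tunes the language model against it, but the core loop is the same shape: generate, score against human preferences, reinforce the winners.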
That being said, though InstructGPT seems like a promising step up, it's still far from perfect. "They still generate toxic or biased results, fabricate facts and generate sexual and violent content without explicit request," according to the researchers at OpenAI, but it still does so less than the older GPT-3. Perhaps in a few generations we'll see a language AI that's a bit further untangled from some of the worst aspects of humanity.