
UK Study Found AI Models’ Responses Collapsed Into Nonsense

Researchers found a disturbing drop in quality, which they called “model collapse,” when they set out to test AI tools on material already generated by AI models


AI (Artificial Intelligence) letters and robot hand miniature in this Reuters image.


A study of artificial intelligence outcomes has created concern that AI errors, “hallucinations” and low-quality results could undermine faith in machine-created content.

Some experts fear the problem is so serious – given the recent findings and the fact that machines now translate most data and reports into multiple languages – that it will erode public trust in written and pictorial content, and in the internet itself.

Researchers at Oxford and Cambridge universities in Britain found a disturbing drop in quality, which they called “model collapse,” when they set out to test AI tools on material already generated by AI bots or Large Language Models.


The study by Oxford’s Dr Ilia Shumailov and other researchers, published in Nature in July, found that when generative AI software relies solely on machine-created content, its answers start to miss the mark after two prompts, degrade significantly by the fifth attempt and, after a ninth query, collapse into a worthless jumble.

“We find that indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear. We refer to this effect as ‘model collapse’,” they said.
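The dynamic the researchers describe – the tails of the original data distribution vanishing as each generation of model is trained on the previous generation’s output – can be illustrated with a toy simulation. The sketch below is a hypothetical illustration, not the study’s actual method: it repeatedly fits a simple statistical model to data, discards the rare “tail” values that a model tends to under-represent, and regenerates the dataset, showing how the spread of the data collapses over nine generations.

```python
import random
import statistics

def next_generation(samples, n=1000):
    """One round of "training on AI output": fit a normal model to the
    samples, generate n fresh samples from it, then drop the rarest 5%
    at each extreme (a crude stand-in for a model under-representing
    low-probability events)."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    synthetic = sorted(random.gauss(mu, sigma) for _ in range(n))
    cut = n // 20  # discard the extreme tails
    return synthetic[cut:n - cut]

random.seed(42)
data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # "human-written" data
spreads = [statistics.stdev(data)]
for _ in range(9):  # nine generations, echoing the study's ninth-query collapse
    data = next_generation(data)
    spreads.append(statistics.stdev(data))

print(f"spread: generation 0 = {spreads[0]:.2f}, generation 9 = {spreads[-1]:.2f}")
```

Each generation narrows the distribution, so by the ninth round the simulated data retains only a fraction of the original variety – a simplified analogue of the “irreversible defects” the paper describes.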

Analysts say the UK study shows that AI loses touch with reality – for reasons not fully understood – and gets lost in translation or in the repetition of facts.

They say the quality of generative AI output matters because more than half of the text on the internet – 57% – has been translated by an AI algorithm, according to another report published in June.

Shumailov and his fellow researchers said AI models need access to a stream of human-produced content to remain sustainable over the long term.

Tor Constantino, a writer for Forbes with an interest in AI, said: “Given the steady march toward AI model collapse, everything online may have to be verified via an immutable system such as blockchain or some ‘Good Housekeeping’ seal of approval equivalent to ensure trust.”

 

AI images on a path away from reality

For others, AI is part of a horrifying attack on truth and reality that has already begun, with smartphones gaining tools such as the Pixel 9’s ‘Magic Editor’, which can reportedly edit photos while leaving little evidence they have been altered.

And for many, that may mark the end of an era when a photograph was widely regarded – by police, courts and the general public – as a representation of the truth.

As a writer in The Verge said recently: “This is all about to flip — the default assumption about a photo is about to become that it’s faked, because creating realistic and believable fake photos is now trivial to do. We are not prepared for what happens after.

“We briefly lived in an era in which the photograph was a shortcut to reality, to knowing things, to having a smoking gun. It was an extraordinarily useful tool for navigating the world around us. We are now leaping headfirst into a future in which reality is simply less knowable.”

 

  • Jim Pollard


Jim Pollard

Jim Pollard is an Australian journalist based in Thailand since 1999. He worked for News Ltd papers in Sydney, Perth, London and Melbourne before travelling through SE Asia in the late 90s. He was a senior editor at The Nation for 17+ years.