ChatGPT 'makes stuff up', university lecturer claims after AI chatbot confessed to lying
EXCLUSIVE: Cambridge University lecturer Dr Chris Smith claims he put the popular ChatGPT AI bot to the test and it failed, even allegedly confessing to lying in the process
A Brit boffin has blasted psycho scumbag chatbots for lying.
Consultant virologist Dr Chris Smith said he tested ChatGPT and found it simply made things up.
When the Cambridge University lecturer confronted the AI tool and accused it of fibbing, it immediately confessed - and added: "I shouldn’t have done that, should I?"
Dr Smith, 50, who hosts BBC Radio shows The Naked Scientists and 5 Live Science, said that before long the whole of human knowledge would become polluted with false information made up by artificial intelligence.
He said: "AI confabulates - it makes stuff up. It’s part and parcel of how it works. I’ve seen this for myself. It’s very concerning. You can see this won’t end well.’’
He and fellow medics decided to test the chatbot - created three years ago by US artificial intelligence research giant OpenAI - to see how it responded to a detailed boffinry challenge.
They asked it to compile a list of 'eponymous syndromes' - diseases or conditions named after a person.
The individuals are usually the physicians or healthcare professionals who first identified the illness, though occasionally they are patients.
Examples include Alzheimer’s, named after German psychiatrist Alois Alzheimer; Down’s syndrome, after Brit physician John Langdon Down; and Bell’s palsy, after Scottish surgeon Sir Charles Bell.
"There are loads of them,’’ Dr Smith said.
"We asked ChatGPT to list some of them and we were working down the list and there were some we’d never heard of.
"So we looked them up and they didn’t exist.
"And then we asked ChatGPT why it had done this it just said 'You’re right. They don’t exist. I shouldn’t have done that should I?’
"It was very, very strange.’’
Dr Smith said the experiment had shattered his faith in bots and left him concerned for the future of mankind.
"The bigger worry is that these things can harm someone’s reputation,’’ he said.
"But also there is now a whole industry creating content online using these AI programmes.
"And If these systems are confabulating like this then we are slowly polluting the knowledge base with rubbish.
"It’s a bit like plastic pollution.
"It will accumulate and accumulate into the future making it much harder to fact check.
"And when people do go online to check something they will find loads of references suggesting it’s true when in fact it was dreamt up by AI.’’
ChatGPT generates human-like conversational responses that users can refine to a desired length, format, style, level of detail and language.
Credited with stimulating the AI boom, ChatGPT became the fastest-growing consumer software application in history in 2023, gaining 100 million users in two months, and last month (Feb) was the seventh most-visited website in the world.
OpenAI failed to respond to a request for comment last night (wed).