RUNNING WILD

Google’s ‘sentient AI child’ could ‘escape and do bad things’, insider claims

A GOOGLE engineer who says the tech giant has created a 'sentient AI child' is now claiming it could escape and do "bad things".

Engineer Blake Lemoine has been suspended by Google, which says he violated its confidentiality policies.

Google looked into Lemoine's claims and has dismissed them (Credit: Getty)

Blake Lemoine claims Google's AI bot could escape (Credit: Twitter)

News of Lemoine's claims broke earlier in June, but the 41-year-old software expert has since suggested to Fox News that the AI could escape.

In a recent interview, he described the AI as a "child" and a "person".

He said: "Any child has the potential to grow up and be a bad person and do bad things."

Lemoine thinks the artificially intelligent software in question has "been alive" for about a year.

The AI being referred to is Google's Language Model for Dialogue Applications (LaMDA).

Lemoine says he helped to create the software, which he thinks has thoughts and feelings like an eight-year-old child.

"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics," he told the Washington Post.

Lemoine was a senior software engineer at the search giant and worked with a collaborator to test LaMDA's boundaries.

They presented their findings to Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, who both dismissed his chilling claims.

Lemoine was then placed on paid administrative leave by Google after violating its confidentiality policy by sharing his conversations with LaMDA online.

The engineer has also said the AI is a "little narcissistic" and claims it reads tweets about itself.

The advanced AI system uses information about a particular subject to "enrich" the conversation in a natural way.

It's also able to understand hidden meanings and ambiguous responses from humans.

Lemoine admits that more research is needed on the AI, conceding that he doesn't fully know what is happening inside it.

He told Fox News: "We actually need to do a whole bunch more science to figure out what’s really going on inside this system.

"I have my beliefs and my impressions but it’s going to take a team of scientists to dig in and figure out what’s really going on."

Brian Gabriel, a spokesperson for Google, said in a statement that Lemoine's concerns have been reviewed and, in line with Google's AI Principles, "the evidence does not support his claims".

"While other organizations have developed and already released similar language models, we are taking a narrow and careful approach with LaMDA to better consider valid concerns about fairness and factuality," said Gabriel.

"Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims.

"He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).

"Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient.

"These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic."
