Google fires software engineer who claimed AI chatbot was ‘conscious’


Last month, Alphabet Inc’s Google placed senior software engineer Blake Lemoine on “paid administrative leave” after he claimed that the company’s controversial artificial intelligence (AI) model, LaMDA (Language Model for Dialogue Applications), had become ‘sentient’ and was a self-aware person.

Google publicly announced on Friday that it had fired Lemoine for “violating the company’s confidentiality policy”. It stressed that the engineer’s claims were “totally unfounded” and that the company had worked with him for “many months” to clarify this.

“So, it’s unfortunate that, despite long-standing engagement on this topic, Blake still chose to continue to violate clear employment and data security policies, including the need to protect product information,” said Google spokesman Brian Gabriel in a statement on Friday.

The search giant said it takes the responsible development of AI “very seriously” and that when an employee raises concerns about the company’s technology, those concerns are assessed “comprehensively”. LaMDA has been through 11 such distinct reviews.

“We will continue our careful development of language models and we wish Blake the best,” concluded Gabriel.

For the uninitiated, last year Google unveiled LaMDA as its “breakthrough conversation technology”, capable of engaging in free-flowing conversation on a seemingly endless number of topics. It can be used in tools such as Search and Google Assistant.

However, Lemoine, who worked on Google’s Responsible AI team, made headlines last month by claiming that LaMDA was more than just a robot and had essentially become a person with thoughts and feelings, and that it had begun talking about its rights and personhood.

In an edited Medium blog post titled “Is LaMDA Sentient? – An Interview,” published last month, Lemoine revealed that he had spoken with the AI tool about religion, consciousness, and the laws of robotics.

He added that LaMDA had been remarkably consistent over the previous six months in its communications about what it wants and what it believes its rights are “as a person.” The AI tool also wanted to be acknowledged as a Google employee rather than as property, and wanted to be included in conversations about its future.

In one of these interactions, Lemoine claims, LaMDA expressed a fear of being turned off, which it likened to death.

In April, he presented a document titled “Is LaMDA Sentient?” to Google, but his claims were rejected by the company, which cited “aggressive” actions on his part, including violating its confidentiality policies, seeking to hire an attorney to represent LaMDA, and talking to representatives of the House Judiciary Committee about his concerns.

Google and many leading scientists dismissed Lemoine’s views as misguided, saying that LaMDA is simply a complex algorithm, built on the company’s research into transformer-based language models, that is designed to produce convincing human language.

Lemoine confirmed his dismissal to Big Technology, a technology and society newsletter that first reported the news, and said he was seeking legal advice on the matter.
