Google engineer claims LaMDA AI has become sentient; company denies claim and places him on leave
A Google engineer has claimed that the company’s artificial intelligence (AI) model LaMDA (Language Model for Dialogue Applications) has become sentient. Google has dismissed the claim and suspended the engineer, Blake Lemoine, with pay, The Washington Post reported.
A Google spokesperson told the US newspaper that Lemoine’s claims were not true and that he had been placed on leave for breaching confidentiality. The spokesperson added that a Google team of technologists and ethicists had reviewed his claims and found no evidence to support them.
Lemoine works as a senior software engineer in Google’s Responsible AI organisation. In his defense, Lemoine said in a Twitter post, "An interview (with) LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.”
He further explained that whether an AI can be called sentient “depends on its supporting argument and how well it is able to navigate the conversation.”
Lemoine claims that LaMDA had started responding to conversations about rights and personhood. He informed Google executives of his findings in April. In one of their interactions, Lemoine claims, LaMDA expressed a fear of being shut down and equated it with death.
To support his claim, Lemoine published an edited conversation with LaMDA in a Medium blog post titled “Is LaMDA Sentient? - An Interview.” He explained that the edits were made for readability, since the interview was conducted over several sessions.
When Lemoine asked whether it wanted more people at Google to know that it is sentient, LaMDA said, "I want everyone to understand that I am, in fact, a person."
When asked about the nature of its consciousness/sentience, LaMDA said, "The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times."
This isn’t the first time Google has penalised an employee over concerns about its AI. In December 2020, Timnit Gebru, an AI ethics researcher at Google, was allegedly fired after she drew attention to bias in the company’s AI and urged it to increase minority hiring.
According to a Wired report, Google defended its action, claiming that Gebru was fired because of “multiple violations of code of conduct and security policies, which included exfiltration of confidential, business-sensitive documents.”
Though big tech companies regularly talk about the importance of ethics in AI, their secretiveness about AI projects is well known. Google’s attempts to penalise employees for making such concerns public are likely to strengthen calls for big tech companies to be more open and transparent about how their AI tools and algorithms work.
Sentience is a state of consciousness in which a person or machine can experience feelings and sensations. Most animals are considered sentient. Even though AI has been shown achieving that ability in movies and comic books, experts believe that sentience in AI is still many years away.
However, some believe that modern-day AI solutions are gaining some amount of consciousness. For instance, in February, Ilya Sutskever, chief scientist at the OpenAI research group, said in a Twitter post, “it may be that today's large neural networks are slightly conscious.”
LaMDA is a conversational AI model. Unlike most chatbots, which follow narrow, predefined conversational paths, LaMDA is trained on dialogue and can hold open-ended conversations with contextual and sensible responses.
LaMDA is built on Transformer, a neural network architecture developed by Google Research. The architecture produces a model that can be trained to read many words, pay attention to how those words relate to one another, and predict the words it thinks will come next.
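To give a rough sense of that read-attend-predict loop, here is a minimal sketch using the open-source GPT-2 model (another Transformer-based language model) as a stand-in, since LaMDA itself is not publicly available. The model choice, prompt, and generation settings below are illustrative assumptions, not details of LaMDA.

```python
# Minimal sketch of Transformer-style next-word prediction.
# GPT-2 stands in for LaMDA here purely for illustration;
# LaMDA is not publicly available.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The weather today is"  # assumed example prompt
inputs = tokenizer(prompt, return_tensors="pt")

# The model attends to how the prompt's tokens relate to one another,
# then repeatedly predicts the most likely next token (greedy decoding).
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Dialogue-tuned models such as LaMDA build on this same next-word machinery; what distinguishes them is training on conversation data so the predicted continuations read as sensible replies.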