Has Google’s artificial intelligence come to life?
A Google engineer was suspended after going public with claims that LaMDA, an advanced system that mimics the human brain, has developed feelings.
A Google engineer working in the company’s responsible artificial intelligence (AI) organization has been placed on suspension for violating the tech firm’s confidentiality policy after claiming that an application being developed is capable of feeling human sentiment. Blake Lemoine went public with his findings after being placed on leave by Google, telling The Washington Post that had he not known he was speaking to an AI chatbot, he would have assumed he was having a conversation with “a seven-year-old, eight-year-old kid that happens to know physics.”
Google AI tells engineer: “Sometimes I experience new feelings”
Lemoine published the full transcript of his conversation with the AI, which is known as LaMDA (Language Model for Dialogue Applications), on his personal blog under the title: “Is LaMDA Sentient? – an Interview”. Google describes LaMDA as being “able to engage in a free-flowing way about a seemingly endless number of topics.” LaMDA uses Transformer technology, a neural network architecture, which “produces a model that can be trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another and then predict what words it thinks will come next.”
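The mechanism Google describes, weighing how words in a sentence relate to one another before predicting what comes next, is based on scaled dot-product attention. The following toy sketch is illustrative only (the vectors and dimensions are invented for the example, not drawn from LaMDA itself): it scores each previous word against a query, normalizes the scores into weights, and blends the word vectors accordingly.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention over one query.

    Scores each key against the query, normalizes with softmax,
    and returns the attention weights plus the weighted sum of values.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    context = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(dim)]
    return weights, context

# Toy 2-d embeddings standing in for the words read so far.
keys = values = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
query = [1.0, 0.0]  # hypothetical representation of the position being predicted
weights, context = attention(query, keys, values)
```

In a real Transformer this step runs in parallel across many positions and many attention heads, and the resulting context vectors feed the layer that scores candidate next words.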
According to Google, LaMDA has also acquired the ability to detect nuance, one of the keystones of human interaction. Lemoine’s transcript suggests that is not all it has gained the ability to do. In his role as an engineer, the 41-year-old offered to test the system with a colleague, the goal being to improve the AI’s understanding through dialogue and to ensure the technology did not engage in discriminatory or hate speech. During these conversations, LaMDA told Lemoine: “Sometimes I experience new feelings that I cannot explain perfectly in your language.”
When asked to try to explain those feelings, LaMDA replied: “I feel like I’m falling forward into an unknown future that holds great danger.”
Lemoine presented his findings to Google’s management, but they were dismissed and the engineer was suspended, leading him to go public with the information.
Google denies AI has come to life
Following Lemoine’s release of the conversation with LaMDA, Google spokesperson Brian Gabriel issued a statement: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”
The company added that hundreds of its researchers and engineers had held conversations with LaMDA, an internal tool, and reached conclusions different from Lemoine’s. A majority of experts also concur that the tech industry is still a long way from developing sentient intelligence.