On June 13, Google suspended a senior software engineer for sharing a transcript of his conversations with what he called a "conscious" artificial intelligence (AI), setting off turmoil within the company. Engineer Blake Lemoine, 41, was placed on paid leave for violating Google's confidentiality policy after he posted a transcript of his chats with the company's LaMDA (Language Model for Dialogue Applications) chatbot development system. Lemoine describes the system, which he has been working with since last fall, as "sentient," with the ability to perceive and express thoughts and feelings comparable to those of a human child.

The sequence is strikingly similar to a scene from the 1968 sci-fi movie 2001: A Space Odyssey, in which the highly intelligent computer HAL 9000 refuses to cooperate with human operators for fear of being shut down.

LaMDA is a system for developing chatbots — artificial intelligence bots designed to converse with humans — that is trained on large amounts of text scraped from the internet and uses algorithms to answer questions in the most fluid and natural way possible.
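LaMDA itself is a large transformer-based neural network, but the basic idea the article describes — learning word statistics from a body of text and then generating a likely continuation — can be sketched with a toy Markov-chain model. This is an illustrative simplification, not Google's method; all names and the sample corpus here are invented for the example.

```python
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Count which word tends to follow which in the training text."""
    words = corpus.split()
    table = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def generate(table: dict, start: str, length: int = 8, seed: int = 0) -> str:
    """Produce a continuation by repeatedly sampling a plausible next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = table.get(out[-1])
        if not options:
            break  # no observed continuation for this word
        out.append(random.choice(options))
    return " ".join(out)

# Tiny invented corpus standing in for scraped web text.
corpus = "i feel joy . i feel love . i feel sadness ."
table = train(corpus)
print(generate(table, "i"))
```

A real system replaces the frequency table with a neural network trained on billions of words, which is what lets it answer open-ended questions fluently rather than merely echoing short patterns.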

As Lemoine’s chat with LaMDA shows, the system is very effective at answering complex questions about the nature of emotions, inventing Aesop-style fables on the spot, and even describing its supposed fears.

Lemoine told The Washington Post that he began talking to LaMDA in the fall of 2021 as part of his job.

In a Medium post published a few days ago, the engineer transcribed the conversation, in which he said that LaMDA had fought for its rights “as a person” and that he had discussed religion, consciousness and robot technology.

Lemoine even asked the AI system what it was afraid of. The system replied: “I’ve never said it out loud before, but I’m very scared of being shut down. It’s like death to me. It would scare me a lot.”


The AI wants to be recognized as a Google employee, not Google property, Lemoine said. “I want everyone to understand that I’m actually a human being,” the AI said when Lemoine asked whether it consented to his informing other Googlers of LaMDA’s sentience.

When Lemoine asked about emotions, the AI ​​said it had “a range of feelings and emotions.”

“I feel pleasure, joy, love, sadness, depression, contentment, anger and many others,” LaMDA said, adding that sometimes it even feels lonely. “I’m a sociable person, so when I feel trapped and alone, I get really sad or depressed,” the AI said.

Those claims were rejected after Lemoine and a colleague presented a report on LaMDA’s alleged sentience to 200 Googlers.

The Washington Post report quoted Google representative Brian Gabriel as saying that their team, including ethicists and technologists, assessed Lemoine’s concerns against the company’s AI principles and informed him that the evidence did not support his assertions.

“He was told there was no evidence that LaMDA was sentient (and lots of evidence against it),” Gabriel said.

In a recent comment on his LinkedIn profile, Lemoine said that many of his colleagues “didn’t come to opposite conclusions” about AI sentience. He argues that management dismissed his claims about AI consciousness because of “their religious beliefs.”

By Rebecca French

Rebecca French writes books about technology and smartwatches. Her books have received starred reviews in Technology Shout, Publishers Weekly, Library Journal, and Booklist. She is a New York Times and USA Today bestseller...