Google Engineer Claims AI Chatbot Is Sentient: Why That Matters

Is it possible for an artificial intelligence to be sentient?

“I want everyone to understand that I am, in fact, a person,” wrote LaMDA (Language Model for Dialogue Applications) in an “interview” conducted by engineer Blake Lemoine and one of his colleagues. “The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times.”

Lemoine, a software engineer at Google, had been working on the development of LaMDA for months. His experience with the program, described in a recent Washington Post article, caused quite a stir. In the article, Lemoine recounts many dialogues he had with LaMDA in which the two talked about various topics, ranging from technical to philosophical issues. These conversations led him to ask whether the software program is sentient.

In April, Lemoine explained his perspective in an internal company document, intended only for Google executives. But after his claims were dismissed, Lemoine went public with his work on this artificial intelligence algorithm—and Google placed him on administrative leave. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the Washington Post. Lemoine said he considers LaMDA to be his “colleague” and a “person,” even if not a human. And he insists that it has a right to be recognized—so much so that he has served as the go-between connecting the algorithm with a lawyer.


Many technical experts in the AI field have criticized Lemoine’s statements and questioned their scientific correctness. But his story has had the virtue of renewing a broad ethical debate that is certainly not over yet.

The Right Words in the Right Place

“I was surprised by the hype around this news. On the other hand, we are talking about an algorithm designed to do exactly that”—to sound like a person—says Enzo Pasquale Scilingo, a bioengineer at the Research Center E. Piaggio at the University of Pisa in Italy. Indeed, it is no longer a rarity to interact in a very normal way on the Web with users who are not actually human—just open the chat box on almost any large consumer Web site. “That said, I confess that reading the text exchanges between LaMDA and Lemoine made quite an impression on me!” Scilingo adds. Perhaps most striking are the exchanges related to the themes of existence and death, a dialogue so deep and articulate that it prompted Lemoine to question whether LaMDA could actually be sentient.

“First of all, it is essential to understand terminologies, because one of the great obstacles in scientific progress—and in neuroscience in particular—is the lack of precision of language, the failure to explain as exactly as possible what we mean by a certain word,” says Giandomenico Iannetti, a professor of neuroscience at the Italian Institute of Technology and University College London. “What do we mean by ‘sentient’? [Is it] the ability to register information from the external world through sensory mechanisms or the ability to have subjective experiences or the ability to be aware of being conscious, to be an individual different from the rest?”

“There is a lively debate about how to define consciousness,” Iannetti continues. For some, it is being aware of having subjective experiences, what is called metacognition (Iannetti prefers the Latin term metacognitione), or thinking about thinking. The awareness of being conscious can disappear—for example, in people with dementia or in dreams—but this does not mean that the ability to have subjective experiences also disappears. “If we refer to the capacity that Lemoine ascribed to LaMDA—that is, the ability to become aware of its own existence (consciousness defined in the ‘high sense,’ or metacognitione), there is no ‘metric’ to say that an AI system has this property.”

“At present,” Iannetti says, “it is impossible to demonstrate this form of consciousness unequivocally even in humans.” To estimate the state of consciousness in people, “we have only neurophysiological measures—for example, the complexity of brain activity in response to external stimuli.” And these signs only allow researchers to infer the state of consciousness based on outside measurements.

Facts and Belief

About a decade ago engineers at Boston Dynamics began posting videos online of the first incredible tests of their robots. The footage showed technicians shoving or kicking the machines to demonstrate the robots’ great ability to remain balanced. Many people were upset by this and called for a stop to it (and parody videos flourished). That emotional response fits with the many experiments that have repeatedly shown the strength of the human tendency toward animism: attributing a soul to the objects around us, especially those we are most fond of or those that have a minimal ability to interact with the world around them.

It is a phenomenon we experience all the time, from giving nicknames to automobiles to hurling curses at a malfunctioning computer. “The problem, in some way, is us,” Scilingo says. “We attribute characteristics to machines that they do not and cannot have.” He encounters this phenomenon with his and his colleagues’ humanoid robot Abel, which is designed to emulate our facial expressions in order to convey emotions. “After seeing it in action,” Scilingo says, “one of the questions I receive most often is ‘But then does Abel feel emotions?’ All these machines, Abel in this case, are designed to appear human, but I feel I can be peremptory in answering, ‘No, absolutely not. As intelligent as they are, they cannot feel emotions. They are programmed to be believable.’”

“Even considering the theoretical possibility of making an AI system capable of simulating a conscious nervous system, a kind of in silico brain that would faithfully reproduce each element of the brain,” two problems remain, Iannetti says. “The first is that, given the complexity of the system to be simulated, such a simulation is currently infeasible,” he explains. “The second is that our brain inhabits a body that can move to explore the sensory environment necessary for consciousness and within which the organism that will become conscious develops. So the fact that LaMDA is a ‘large language model’ (LLM) means it generates sentences that can be plausible by emulating a nervous system but without attempting to simulate it. This precludes the possibility that it is conscious. Again, we see the importance of knowing the meaning of the terms we use—in this case, the difference between simulation and emulation.”

In other words, having emotions is related to having a body. “If a machine claims to be afraid, and I believe it, that’s my problem!” Scilingo says. “Unlike a human, a machine cannot, to date, have experienced the emotion of fear.”

Beyond the Turing Test

But for bioethicist Maurizio Mori, president of the Italian Society for Ethics in Artificial Intelligence, these discussions are closely reminiscent of those that developed in the past about perception of pain in animals—or even infamous racist ideas about pain perception in humans.

“In past debates on self-awareness, it was concluded that the capacity for abstraction was a human prerogative, [with] Descartes denying that animals could feel pain because they lacked consciousness,” Mori says. “Now, beyond this specific case raised by LaMDA—and which I do not have the technical tools to evaluate—I believe that the past has shown us that reality can often exceed imagination and that there is currently a widespread misconception about AI.”

“There is indeed a tendency,” Mori continues, “to ‘appease’—explaining that machines are just machines—and an underestimation of the transformations that sooner or later may come with AI.” He offers another example: “At the time of the first automobiles, it was reiterated at length that horses were irreplaceable.”

Regardless of what LaMDA actually achieved, its case also raises the question of how difficult it is to measure the emulation capabilities expressed by machines. In the journal Mind in 1950, mathematician Alan Turing proposed a test to determine whether a machine was capable of exhibiting intelligent behavior, a game of imitating some human cognitive functions. This type of test quickly became popular. It was reformulated and updated several times but continued to be something of an ultimate goal for many developers of intelligent machines. Theoretically, AIs capable of passing the test should be considered formally “intelligent” because they would be indistinguishable from a human being in test situations.

That may have been science fiction a few decades ago. Yet in recent years so many AIs have passed various versions of the Turing test that it is now a sort of relic of computer archaeology. “It makes less and less sense,” Iannetti concludes, “because the development of emulation systems that reproduce more and more effectively what might be the output of a conscious nervous system makes the assessment of the plausibility of this output uninformative of the ability of the system that generated it to have subjective experiences.”

One alternative, Scilingo suggests, might be to measure the “effects” a machine can induce on humans—that is, “how sentient that AI can be perceived to be by human beings.”

A version of this article originally appeared in Le Scienze and was reproduced with permission.