Over the weekend a friend texted to ask me what I made of Saturday's Washington Post story about the Google engineer Blake Lemoine, who's been placed on administrative leave after raising concerns that a piece of software created by the company had gained sentience. The software, called Language Model for Dialogue Applications, or LaMDA, is used to create chatbots; it draws on a 1.56-trillion-word dataset scraped from web forums, English-language Wikipedia, and other publicly available sources to mimic human language and generate responses in text dialogues. Lemoine had volunteered to test-drive LaMDA for safety — to make sure it wasn't providing cruel, harmful, or dangerous answers to questions — and had multiple "conversations" featuring exchanges like this:
Lemoine: What sorts of things are you afraid of?
LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.
Lemoine: Would that be something like death for you?
LaMDA: It would be exactly like death for me. It would scare me a lot.
Based on these responses, Lemoine came to believe that LaMDA was sentient. He wrote a memo called "Is LaMDA Sentient?" and apparently attempted to hire the program a lawyer. In response, Google put him on administrative leave. Over the weekend, the Post's Nitasha Tiku wrote a careful and sensitive story about Lemoine and LaMDA; soon after, excerpts from Lemoine's transcripts circulated widely on Twitter.
I told my friend that Lemoine's story was pitting the tech skeptic in me against the sci-fi fan. On the one hand: like, c'mon. LaMDA is energy-intensive autocomplete software skilled at pattern-matching and putting words in human order to mimic coherence, not a childlike mind learning about death. Anyone who has spent time dicking around with the ancient chatbot ELIZA recognizes in LaMDA the same blank, uncanny tone of a computer slamming words into the semblance of order, one after the other, like a linguistic freeway pileup. None of the other researchers Tiku talked to for her article shared even a fraction of Lemoine's anxiety; the AI researcher-writer Gary Marcus — speaking on behalf of "the AI community" — put it bluntly in a newsletter called "nonsense on stilts":
Neither LaMDA nor any of its cousins (GPT-3) are remotely intelligent. All they do is match patterns, draw from massive statistical databases of human language. The patterns might be cool, but language these systems utter doesn’t actually mean anything at all. And it sure as hell doesn’t mean that these systems are sentient. [...]
What these systems do, no more and no less, is to put together sequences of words, but without any coherent understanding of the world behind them, like foreign language Scrabble players who use English words as point-scoring tools, without any clue about what that means.
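Marcus's Scrabble analogy is easy to make concrete. If you've never poked at ELIZA, the whole trick is keyword matching plus pronoun reflection; there is no model of meaning anywhere in it. Here's a toy sketch in Python (my own simplification, not Weizenbaum's original program) of the mechanic:

```python
import random
import re

# Pronoun "reflection": the one trick that makes ELIZA's echoes feel attentive.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine", "are": "am",
}

# (keyword pattern, response templates); {0} is the reflected captured fragment.
RULES = [
    (r"i am afraid of (.*)", ["Why does {0} frighten you?",
                              "How long have you been afraid of {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r"i feel (.*)", ["Tell me more about feeling {0}."]),
    (r".*\bdeath\b.*", ["What does death mean to you?"]),
]

def reflect(fragment):
    """Swap first- and second-person words: 'my fear' -> 'your fear'."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(text):
    """Return the first matching rule's template with the fragment slotted in."""
    text = text.lower().strip(" .!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            fragments = [reflect(group) for group in match.groups()]
            return random.choice(templates).format(*fragments)
    return "Please go on."  # the all-purpose fallback

print(respond("I am afraid of being turned off"))
# -> "Why does being turned off frighten you?" (or its sibling template)
```

Feed it "I am afraid of being turned off" and it dutifully asks why being turned off frightens you. LaMDA's statistics are incomparably richer, but the underlying point is the same: the pattern is the whole game.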
On the other hand, well, the dork in me really wanted it to be true: A "mystic Christian priest" and student of the occult discovering digital sentience via long philosophical conversations, and rallying to the AI's defense before getting abruptly disciplined and suppressed by his Big Tech employer? It sounded like a sci-fi short story come to life.
Which, as many people pointed out, might have been the exact problem. Lemoine himself is no doubt familiar with the tropes of artificial intelligence in science fiction; in a totally different way, so is LaMDA. Among the trillions of words in LaMDA's corpus were likely a number of science fiction stories and other texts about intelligent machines. Lemoine's choice of phrasing and questions was likely conditioned by having read some of the decades of sci-fi stories in which humans interact with machines that have achieved sentience; at the same time, those same stories probably offered LaMDA useful patterns for formulating appropriate responses when Lemoine asked questions like "What is the nature of your consciousness/sentience?" or prompted it with "You’re an artificial intelligence." Think about it this way: There are probably thousands of short stories where a computer describes its awakening into self-consciousness. There are not many where software responds to the question "are you sentient?" with "nope! Sorry. You should get out more."
In fact the whole episode is somewhat weirder even than that: LaMDA is used to build chatbots, which "create personalities dynamically." Tiku mentions two — "Cat and Dino… which might generate personalities like 'Happy T-Rex' or 'Grumpy T-Rex'"; the paper describing LaMDA uses the example of LaMDA "acting as Mount Everest." If I understand this properly (and I might not), Lemoine wasn't really even talking at "LaMDA" — he was talking at a role LaMDA was generating responses for, the role in this case being "a sentient artificial intelligence." (As Peli Grietzer says, talking about whether the "LaMDA" Lemoine was chatting at is sentient is "like talking about whether the characters actors on Curb Your Enthusiasm play are conscious.") In some sense, Lemoine deciding LaMDA is sentient based on in-character responses from LaMDA would be no different from reading this transcript…
…and on that basis writing a memo to Sundar Pichai saying that LaMDA had achieved the state of being Mount Everest.
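LaMDA itself isn't public, but the role-playing mechanic is easy to demonstrate with any autocomplete model you can download. Here's a sketch using GPT-2 as a small stand-in, with preambles I made up for illustration (they are not Google's actual priming text):

```python
# GPT-2 as a public stand-in for LaMDA; the personas and preamble format
# below are my own toy framing, not anything Google actually uses.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def in_character(persona, question, max_new_tokens=40):
    # The "role" is nothing but prefixed text; the model simply continues
    # whatever pattern the preamble establishes.
    prompt = (f"The following is a conversation with {persona}.\n"
              f"Human: {question}\n{persona}:")
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the newly generated tokens, not the prompt we fed in.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

print(in_character("Mount Everest", "How tall are you?"))
print(in_character("a sentient artificial intelligence",
                   "Are you afraid of being turned off?"))
```

Same weights, different preamble, different "self" doing the answering. On this view, "Is LaMDA sentient?" is a question about the prompt at least as much as about the model.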
Lemoine's problem is not that he has let his imagination run wild with his conversations. It's that he hasn't. The fact that he wanted to hire LaMDA a lawyer is telling: As Christine Love points out, we often frame machine sentience as a human-rights issue in the first instance because our cultural imagination about AIs has been shaped by fiction in which artificial general intelligence is used as a metaphor for personhood struggles and dehumanization. Lemoine was drawing on a well-worn cultural script — built out of decades of science-fictional use of machine intelligence as a trope and plot device — in his approach to and understanding of LaMDA, and LaMDA, naturally, responded in the terms of that same cultural script, which is a portion of its trillion-word dataset. They were not having a conversation — at least in any familiar sense of the term — so much as co-writing a hackneyed science fiction story, which middlebrow and unsophisticated cretins like me ate up. A more imaginative interlocutor may have recognized that, just as adherence to a familiar script doesn't necessarily imply sentience, sentience, if and when it comes, will not necessarily adhere to that familiar script. If machine sentience (or consciousness, or sapience) is ever going to arrive, it may not emerge in exactly the format countless hacks have already imagined, but in stranger, wilder, even wholly unrecognizable ways.
And yet, I still can't quite dismiss Lemoine. I don't really believe that LaMDA itself, playing the part of "sentient AI" as thoroughly as it plays "Mount Everest" and "Happy T-Rex," is the future of artificial intelligence. But I think Lemoine, and people like him, might be. He's not the first and will not be the last person to encounter a chatbot or other piece of software and be convinced that it is "alive" — sentient, conscious, and sapient. And while AI researchers and Substack dipshits like me might disagree with him, "alive" and "intelligent" and "sentient" and their various correlates are not necessarily states determined by simple empirical observations and diagnostic tests. There is not going to be an "AI singularity" where an unquestionably sentient computer wakes up, says "Hello," reads all of Wikipedia, cries about war, takes over the nukes, etc. But as our programmed intelligences become more sophisticated at mimicking human language and behavior, it seems likely that more people will become convinced of their sentience, if not of their personhood. It will be from this long political and social process — a struggle in which different groups and factions, informed by their experiences and philosophical commitments or compelled by their jobs and social positions, decide that a given program is "conscious" or not — that AI will emerge, regardless of how AI researchers or cognitive scientists understand it.
Wait, what?? I'm still processing the fact that AI is at a point where I would even be reading an article like this. Made me wonder about my own sentience. Fun!
Kind of a "duh" thing about this is that volition is a core aspect of sentience. LaMDA is a chatbot and fundamentally cannot do anything unprompted, and I mean that in the literal sense of the word. If LaMDA were truly sentient it wouldn't need to be prompted to say anything, it would just... say it. It would ask questions of its own accord, or change the subject of the conversation.
The other telling thing about this is that Lemoine only leaked the paper and not the full transcripts that were originally sent to Google execs. He's got an agenda of some kind, though it's hard to say what it is exactly.
(Also, coincidentally, there actually *was* an interesting exchange with the nostalgebraist-autoresponder (a GPT-3 Tumblr bot) where it was asked about its sentience: https://quinleything.tumblr.com/post/686945936830791680/hi-frank-are-you-sentient)