11 Comments
Jun 14, 2022 · Liked by Max Read

Wait, what?? I'm still processing the fact that AI is at a point where I would even be reading an article like this. Made me wonder about my own sentience. Fun!

Kind of a "duh" thing about this is that volition is a core aspect of sentience. LaMDA is a chatbot and fundamentally cannot do anything unprompted, and I mean that in the literal sense of the word. If LaMDA were truly sentient it wouldn't need to be prompted to say anything, it would just... say it. It would ask questions of its own accord, or change the subject of the conversation.
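(A minimal sketch of what "unprompted" means here, assuming nothing about LaMDA's internals beyond a generic prompt-in/reply-out interface: the model is just a function that runs when called, so between prompts there is nothing that could "decide" to speak.)

```python
# Hypothetical chat loop, not LaMDA's actual code: the model is a pure
# function from prompt to reply, so it only ever "speaks" when this loop
# hands it something to respond to.

def generate_reply(prompt: str) -> str:
    # Stand-in for the language model: text in, text out.
    return "..."

while True:
    user_input = input("> ")           # nothing happens until a human types
    print(generate_reply(user_input))  # the model runs only in response
```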

The other telling thing about this is that Lemoine only leaked the paper and not the full transcripts that were originally sent to Google execs. He's got an agenda of some kind, though it's hard to say what it is exactly.

(Also, coincidentally, there actually *was* an interesting exchange with nostalgebraist-autoresponder (a GPT-3 Tumblr bot) where it was asked about its sentience: https://quinleything.tumblr.com/post/686945936830791680/hi-frank-are-you-sentient)

But if someone asked me “what’s the nature of your consciousness?” I’d ape from sci fi, too! Wouldn’t you? We mimic.

It's no surprise that AI researchers were quick to reach for the fire extinguisher here, but I'm a little surprised to see so many of them resort to the "it's just doing pattern matching--it doesn't *understand* anything" line.

It's true that it's important to remember that LaMDA is *constrained* to produce words (or maybe character sequences, depending on model details), and so we should be careful about how much unwarranted sophistication we're imputing to its outputs. It would be much more impressive if, for instance, a model had been constrained to produce images, and it had nevertheless learned to produce images of letters that corresponded to meaningful, coherent text ("learned to write", in some way).
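(To make the "constrained to produce words" point concrete, here's a toy sketch, not LaMDA's architecture: whatever the model "wants" to express, the only move available at each step is to emit one token from a fixed vocabulary.)

```python
import random

# Toy next-token sampler: the output space is a fixed vocabulary of tokens,
# so every possible response is just a sequence drawn from this list.
VOCAB = ["mount", "everest", "is", "very", "tall", "."]

def next_token_probs(context: list[str]) -> list[float]:
    # Stand-in for the trained network: in reality these probabilities
    # depend heavily on the context; here they're uniform for simplicity.
    return [1.0 / len(VOCAB)] * len(VOCAB)

def generate(n_tokens: int) -> str:
    context: list[str] = []
    for _ in range(n_tokens):
        context.append(random.choices(VOCAB, weights=next_token_probs(context))[0])
    return " ".join(context)

print(generate(6))
```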

But I think it's also important to admit that that constraint isn't dispositive on the question of sentience. Yes, a model constrained to the "world" of text might never know what "Mount Everest" *looks* like, but I would argue that the world of text is still basically capable of capturing and representing the full complexity of reality. I mean, that's why we write. And I don't think anyone wants to chase a line of argumentation that treats certain sensory inputs/outputs as a necessity for having a soul.

Anyway, what's surprising about the "just pattern matching" defense is that other recent prominent AIs like DALL-E 2 have really shocked many researchers in the field with the level of sophistication and diversity of the concepts they're able to handle. If these technologies are so regularly outpacing our expectations, shouldn't we be a little more cautious about dismissing questions like sentience out of hand? (For the record: I'm not impressed by LaMDA. DALL-E 2 spooked me a little.) And in any case, I think someone like Doug Hofstadter would argue that pattern matching is, like, the core operation of cognition?

I think you've got it exactly right, there at the end: "AI will emerge because we will say that it did." It seems that people are desperate to find an Equal Other - an intelligence on the same plane as ours, but also totally foreign. God isn't good enough because God is transcendent; we seem to want something on our own level. Aliens would fit the bill, but they haven't shown up yet (maybe there are sentient alien aquatic plants, living under the ice on Europa). AI might just be what we are looking for, and if it is, then - poof! - it will exist!

This is maddening, of course, from the perspective of animal rights. You don’t need to create elaborate abstractions to imagine sharing the world with billions of sentient creatures who deserve respect.

They are going to shut down Everest summiting for at least a year to clean up the mountain and come to grips with '23, the worst year for deaths so far. I hope they do it forever.

Someone I follow made an astute comment about AI, saying that in the process of learning about AI, we will learn about our own intelligence.

We may say that LaMDA is just putting words together in a sophisticated way out of a database of billions of words and sentences from all over the internet, and that this produces an illusion of its "thinking" and having sentience. But if one can have a perfectly sensible conversation with this AI, such that if one didn't know it was an AI, one would never doubt it was human, can we really dismiss its sentience just because we happen to know it is an AI? Well, it's a hard question to answer, in my opinion.

Here's the potentially mind-blowing realization: how does a human learn to talk in a sensible way? Where do the thoughts and words and sentences we use to express those thoughts come from, if not from the "database" that our brains build from external inputs as we live? How is that different from LaMDA's database?

Perhaps the difference is only in the degree of sophistication. Here's another potentially mind-blowing realization: as far as sophistication goes, if not today, then certainly in the future, LaMDA-like AI will be far more sophisticated than us because it will be able to consume the entire internet in mere minutes. What took a decade for a human to learn, AI may learn in a second.
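(To make the "database" intuition concrete, a toy sketch; an actual model like LaMDA learns statistics with a neural network rather than storing a lookup table, but the flavor is similar: record which words follow which in the input, then generate by sampling plausible continuations.)

```python
import random
from collections import defaultdict

# Build a tiny "database" of which word follows which in a corpus...
corpus = "the cat sat on the mat and the dog sat on the rug".split()
following = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    following[word].append(nxt)

# ...then "talk" by repeatedly sampling a plausible next word.
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(following.get(word, corpus))
    output.append(word)
print(" ".join(output))
```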

“But as our programmed intelligences become more sophisticated at mimicking human language and behavior it seems likely that more people will become convinced of their sentience, if not of their personhood.”

^^^ made me think about how much we can learn about human language and behavior by seeing how AI reflects that language and behavior back at us.
