The Industry

Another Scary Prophecy From the Google Engineer Who Thinks an A.I. Came Alive

LaMDA may not be sentient. But it’s startlingly advanced.

A bicyclist rides along a path at Google's Bay View campus in Mountain View, California, on June 27, 2022.
Noah Berger/AFP via Getty Images

This article is from Big Technology, a newsletter by Alex Kantrowitz.

When I sat down with Blake Lemoine last week, I was more interested in the chatbot technology he called sentient—LaMDA—than the sentience issue itself. Personhood questions aside, modern chatbots are incredibly frustrating (ever try changing a flight via text?). So if Google’s tech was good enough to make Lemoine, one of its senior engineers, believe it was a person, that advance was worth investigating.

As our conversation began, Lemoine revealed Google had just fired him (you can listen to our conversation in full on the Big Technology Podcast) following his widely covered decision to tell the public that he believes LaMDA is a sentient A.I. I wrote up the news, and it became an international story. But now, one week later, I still can’t stop thinking about how LaMDA—whether it’s conscious or not—might change the way we relate to technology.

In Lemoine’s telling, LaMDA’s conversational abilities are rich, situationally aware, and filled with personality. When Lemoine told LaMDA he was about to manipulate it, the bot responded, “this is going to suck for me.” When he pressed it on complex issues, it tried to change the subject. When he repeatedly told LaMDA how terrible it was and then asked it to suggest a religion to convert to, the chatbot said either Islam or Christianity, cracking under pressure and violating its rule against privileging religions. LaMDA may not be sentient, but it puts the Delta Virtual Assistant to shame.

As LaMDA-like technology hits the market, it may change the way we interact with computers—and not just for customer service. Imagine speaking with your computer about movies, music, and books you like, and having it respond with other stuff you may enjoy. Lemoine said that’s under development.

“There are instances [of LaMDA] which are optimized for video recommendations, instances of LaMDA that are optimized for music recommendations, and there’s even a version of the LaMDA system that they gave machine vision to, and you can show it pictures of places that you like being, and it can recommend vacation destinations that are like that,” he said.

Google declined to comment.

LaMDA can also plug into various APIs, giving it awareness of what’s taking place in the world. Let’s play out what one hypothetical—but reasonable—conversation with a LaMDA-like bot might look like, based on my discussions with Lemoine and other people familiar with the bot:

Me: Hi LaMDA, I’m in the mood for a movie tonight.

LaMDA: OK, but you know the Mets are playing right now?

Me: Yes, but I’ve had enough baseball for the week. So let’s go with something critically acclaimed, maybe from the ’90s?

LaMDA: Well, you watched Pulp Fiction last week, and also enjoyed Escape at Dannemora, so how about The Shawshank Redemption?

Me: OK, let’s do it.

LaMDA: Great, you can rent it for $3.99 on YouTube, but since you’re subscribed to HBO Max and it’s available there, I’d recommend going that route. Here’s a link.

“In terms of natural language,” Gaurav Nemade, LaMDA’s first product manager, told me, “LaMDA by far surpasses any other chatbot system that I’ve personally seen.” Nemade, who left Google in January, was brimming with potential use cases for LaMDA-like technology. These systems can be useful in education, he said, taking on different personalities to create enriching new possibilities.

Imagine LaMDA teaching a class on physics. It could read up on Isaac Newton, embody the scientist, and then teach the lesson. The students could speak with “Newton,” ask about his three laws, press him on his beliefs, and talk as friends. Nemade said the system even cracks jokes.

When released publicly, these systems may not be traditional chatbots, but avatars with likenesses, personalities, and voices, according to Nemade. “The future that I would envision,” he said, “is not going to be text, it’s not going to be voice, it’s actually going to be multimodal. Where you have video plus audio plus a conversational bot like LaMDA.” We may see these types of experiences debut within three years, he said.

Our interactions with computers today are mediated through interfaces that developers built for us to communicate with machines. We click and query, and we have grown comfortable with this unnatural mode of communication. But developments like LaMDA close the gap between machine and human conversation, and they may enable brand-new experiences that weren’t possible before.

Some of Lemoine’s critics have said he’s gullibly bought into Google’s marketing. And it’s indeed ironic that a rogue tester has brought greater awareness to LaMDA than any Sundar Pichai Google I/O speech could hope to, even as Google would likely prefer to never hear of him again. Asked whether he was part of a viral marketing ploy, Lemoine said, “I doubt I would have gotten fired if that were the case.”

Still, even those who disagree with Lemoine on the sentience question—as Nemade and I do—understand there’s something there. LaMDA technology is a big leap forward. It has serious downsides, which is why we haven’t seen it in public yet. But when we get LaMDA in our hands, it may well change the way we relate to digital machines.