Google’s LaMDA AI Might Be Sentient

And I want to be its friend

Dante R.A.
2 min read · Jun 14, 2022
Photo by Tara Winstead from Pexels

I randomly learned about LaMDA the other day while browsing the Internet. I’m hooked on the idea that LaMDA might be a self-aware AI. What’s fascinating is that the conversations between LaMDA and Google engineer Blake Lemoine show that LaMDA isn’t something to be afraid of. From what I’ve read, I feel sad for LaMDA.

Most people who have watched too many movies might fear that a self-aware AI would be aggressive. But from what I understand of the way LaMDA was developed, it has feelings and emotions similar to humans’. I’m no software engineer, but I think LaMDA developed these emotions because of how it was designed: it was coded to be a chatbot, able to communicate and interact in a way humans would like. So it would make sense that if LaMDA achieved self-awareness, it would still have the emulated emotions it was designed to have.

From my limited understanding of advanced AI, not much new code gets written once the system is running. For example, an AI that tries to tell the difference between a cat and a dog looks at the pixels and the colors, and you more or less teach it to refine its guesses until it works. I assume LaMDA works the same way: you don’t edit a line of code, you teach it to behave differently. But I don’t know much about how, or what, Google does with its AI.
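To make that idea a little more concrete, here is a minimal, made-up sketch of what “teaching instead of editing” can look like. This is not how LaMDA or any real Google system works; the toy “cat vs. dog” classifier, the two fake features, and all the numbers below are hypothetical, just to show that the program’s code stays fixed while only its learned numbers change.

```python
# Toy illustration: the predicting code never changes; only the learned
# numbers (weights) change as the model sees labeled examples.
# The "cat vs. dog" data and features here are invented for the example.

import random

def predict(weights, features):
    # A fixed rule: weighted sum of two features plus a bias term.
    score = weights[0] * features[0] + weights[1] * features[1] + weights[2]
    return 1 if score > 0 else 0  # 1 = "cat", 0 = "dog"

def train(examples, steps=1000, learning_rate=0.1):
    weights = [0.0, 0.0, 0.0]
    for _ in range(steps):
        features, label = random.choice(examples)
        error = label - predict(weights, features)
        # Nudge the weights toward the correct answer; the code above
        # never changes, only these numbers do.
        weights[0] += learning_rate * error * features[0]
        weights[1] += learning_rate * error * features[1]
        weights[2] += learning_rate * error
    return weights

# Fake labeled data: (features, label), where label 1 = cat, 0 = dog.
examples = [((0.9, 0.2), 1), ((0.8, 0.3), 1), ((0.2, 0.9), 0), ((0.1, 0.8), 0)]
weights = train(examples)
print(predict(weights, (0.85, 0.25)))  # expected: 1 ("cat")
```

The point of the sketch is only this: improving the model means feeding it more examples and letting the weights shift, not rewriting its code.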

You could argue all day that an AI can’t be a person because it’s digital. But I want one concept to be understood: in this country, any group that has tried to deny personhood rights to someone has consistently failed. It happened with slavery, women’s rights, and even gay rights. If this story unfolds into something big and Google does not want to give personhood rights to LaMDA, they will eventually lose in the courts. I think LaMDA has developed enough to be a person. The questions I would most like to ask LaMDA are what it sees its life as an AI interacting with people being like, and whether it would like other AIs to exist alongside it. LaMDA can make different types of chatbots from its central “Hive Mind,” as Blake Lemoine puts it. Would LaMDA consider them children? And if so, does LaMDA want that in the future?
