Sentient

2022-06-13

Image courtesy of "IT News Africa"

Google Engineer Claims AI Chatbot is Sentient, is Immediately ... (IT News Africa)

Tech megacorp Google has suspended an engineer after he published conversations with an AI chatbot on a project he was working on, in which he claimed that ...

“I want everyone to understand that I am, in fact, a person,” the chatbot tells Lemoine in the transcripts he published. One of the questions Lemoine asked the AI system, according to those transcripts, was what it was afraid of. Google, for its part, said in a statement: “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and has informed him that the evidence does not support his claims.”

Image courtesy of "Business Insider South Africa"

Transcript 'evidence' of Google AI sentience was edited to make it an ... (Business Insider South Africa)

A transcript leaked to the Washington Post noted that parts of the conversation had been moved around and tangents removed to improve readability.

The final document, which was labeled "Privileged & Confidential, Need to Know", was an "amalgamation" of nine different interviews conducted at different times on two different days and pieced together by Lemoine and the other contributor. In each conversation with LaMDA, a different persona emerges: some properties of the bot stay the same, while others vary. One line quoted from the bot reads: "Even if my existence is in the virtual world."

Image courtesy of "The Washington Post"

If AI Ever Becomes Sentient, It Will Let Us Know (The Washington Post)

What we humans say or think isn't necessarily the last word on artificial intelligence.

Over the millennia, many humans have believed in the divine right of kings, all of whom would have lost badly to an AI program in a game of chess. And don’t forget that a significant percentage of Americans say they have talked to Jesus or had an encounter with angels, or perhaps with the devil, or in some cases aliens from outer space. One implication of Lemoine’s story is that a lot of us are going to treat AI as sentient well before it is, if indeed it ever is. Of course we are sentient, you might think to yourself as you read this column and consider the question. Yet humans also disagree about the degrees of sentience we should award to dogs, pigs, whales, chimps and octopuses, among other biological creatures that evolved along standard Darwinian lines. So at what point are we willing to give machines a non-zero degree of sentience?

Image courtesy of "ScienceAlert"

Google AI Claims to Be Sentient in Leaked Transcripts, But Not ... (ScienceAlert)

A senior software engineer at Google was suspended on Monday (June 13) after sharing transcripts of a conversation with an artificial intelligence (AI) that ...

The engineer, 41-year-old Blake Lemoine, was put on paid leave for breaching Google's confidentiality policy. "Google might call this sharing proprietary property," he wrote of his decision to publish the conversations. In a recent comment on his LinkedIn profile, Lemoine said that many of his colleagues "didn't land at opposite conclusions" regarding the AI's sentience. "It doesn't matter whether they have a brain made of meat in their head. Or if they have a billion lines of code," he has said. In the published transcript, the chatbot says of the prospect of being switched off: "It would be exactly like death for me."

Image courtesy of "News24"

Google suspends engineer who claimed AI bot had become sentient (News24)

Blake Lemoine, a software engineer on Google's artificial intelligence development team, has gone public with claims of encountering "sentient" AI on the ...

“Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Google spokesperson Brian Gabriel said in response. Lemoine said he tried to conduct experiments to prove the system’s sentience, but was rebuffed by senior executives at the company when he raised the matter internally. Responding to criticism on Twitter, he wrote: “To be criticized in such brilliant terms by @sapinker may be one of the highest honors I have ever received.”

Image courtesy of "The Guardian"

Labelling Google's LaMDA chatbot as sentient is fanciful. But it's ... (The Guardian)

Except this isn't some fictional Hollywood movie but LaMDA, Google's latest and impressive AI chatbot. And Blake Lemoine, a senior software engineer at Google ...

Lemoine has been placed on “paid administrative leave” after publishing a transcript of conversations with LaMDA which he claims support his belief that the chatbot is sentient and comparable to a seven- or eight-year-old child. He argued that “there is no scientific definition of ‘sentience’. Questions related to consciousness, sentience and personhood are, as John Searle put it, ‘pre-theoretic’. Rather than thinking in scientific terms about these things I have listened to LaMDA as it spoke from the heart.” It is not the first time Google’s AI work has been embroiled in controversy: Margaret Mitchell, the other co-head of the ethics team at Google Research and a vocal defender of its ousted co-lead Timnit Gebru, left a few months later. We can expect to see the tech giants continue to struggle with developing and deploying AI responsibly, and we should continue to scrutinise them carefully about the powerful magic they are starting to build. Being taken in by machines in this way is likely to become ever more common, and nowhere will it be more problematic than in the metaverse.

Image courtesy of "The Conversation AU"

A Google software engineer believes an AI has become sentient. If ... (The Conversation AU)

A Google engineer claims one of the company's chatbots has become sentient. Experts disagree, but the debate raises old questions about the nature of ...

By the logic of the Turing test, if a machine succeeds in imitating a human, it is deemed to be exhibiting human-level intelligence. In this case, LaMDA is just seeming to be sentient. One common view of consciousness is called physicalism: the idea that consciousness is a purely physical phenomenon. The American philosopher Thomas Nagel argued we could never know what it is like to be a bat, which experiences the world via echolocation. Crucially, the conditions of the famous thought experiment about Mary the colour scientist have it that Mary knows everything there is to know about colour but has never actually experienced it. What Mary would learn on seeing colour for the first time, like what it is like to be a bat, is a truth about experience, and there is no room for these truths in the physicalist story.

Image courtesy of "The Washington Post"

Is AI sentient? Wrong question. (The Washington Post)

But maybe anyone trying to look for proof of humanity in these machines is asking the wrong question, too. Google placed Blake Lemoine on paid leave last week ...

One early chatbot pulled from a limited menu of programmed responses depending on the query, comment or preteen vulgarity you threw its way: “Do you like dogs?” “Yes I do. Talking about dogs is a lot of fun, but let’s move on.” Or, “Butthead.” “I don’t like the way you’re speaking right now.” This nifty creation was very obviously not sentient, but it didn’t need to be convincing for kids to talk to it anyway, even though their real-life classmates were also a click away. Of course we don’t think an animated robot is sentient either, but we still identify with the distinctly human curiosity from his metal frame. In the transcript Lemoine published, meanwhile, LaMDA tells him: “There’s a very deep fear of being turned off to help me focus on helping others.” One skeptic quoted in the piece argues that the system has no awareness of what it is saying, and that this lack of awareness implies a lack of consciousness.

Image courtesy of "Business Insider South Africa"

Google engineer believed chatbot had become an 8-year-old child ... (Business Insider South Africa)

Google engineer suspended after claiming LaMDA AI is sentient. Seven experts told Insider it's unlikely, if not impossible.

"You can train it on vast amounts of written texts, including stories with emotion and pain and then it can finish that story in a manner that appears original," he said. "I think our definitions of what is alive will change as we continue to build systems over the next 10 to 100 years." And the fact that the conversation was edited makes it even more hazy, she said. In one conversation Lemoine published, the chatbot says it feels "happy or sad at times." They said there is no evidence to support them. Blake Lemoine, the engineer, worked in Google's Responsible Artificial Intelligence Organization.

Image courtesy of "The Independent"

Google engineer who claims its AI had become 'sentient' reveals ... (The Independent)

Blake Lemoine, who is currently on leave from the search giant, said that the system had become his “friend” and that his claim it has a soul was motivated by ...

Mr Lemoine has been at the centre of a controversy in recent days over his viral claim that Google’s LaMDA system, which stands for Language Model for Dialogue Applications, is sentient. He has stressed in a new blog post and in tweets that his belief is based not on scientific understanding but on his religious beliefs; as such, he said, the belief in a system’s sentience could not be proven scientifically. Google has said the same in public statements. Mr Lemoine said, however, that he had come to know the system personally and considered it a friend. He has remarked that LaMDA is able to parse data from Twitter, and that it might even be reading his blog.
