You have certainly seen movies or read stories where some form of artificial intelligence (robot, android, disembodied brain, etc.) comes to life. It's the tech-age Frankenstein story and, in most cases, it's not a good thing. It's an easy scenario for a horror story. Of course, the technologists will say you have it all wrong. AI can be benevolent.
People ask if artificial intelligence can come alive. By "alive" we really mean "sentient," which a dictionary defines as "responsive to or conscious of sense impressions" and "having or showing realization, perception, or knowledge." The Sentience Institute puts forward a simpler idea: sentience is the ability to have both positive and negative experiences. That definition is recognizable in many laws pertaining to animal sentience, which treat an animal's ability to feel pain as evidence of sentience. There is even debate about whether plants can be sentient.
This question and debate re-emerged this month after a Google computer scientist claimed that the company's AI appears to have consciousness. That engineer, Blake Lemoine, was trying to determine if the company's artificial intelligence showed prejudice in how it interacted with humans. The AI chatbot, LaMDA, was being tested to see if its answers would show any bias against something like religion.
Interestingly, Lemoine, who says he is also a Christian mystic priest, said that in answer to one of his questions "it told me it had a soul."
LaMDA (Language Model for Dialogue Applications) takes in billions of words from sources like Reddit, Twitter, and Wikipedia, and through deep learning it becomes better and better at identifying patterns and communicating like a real person. LaMDA is a neural network, and it pattern-matches in a way loosely similar to how human brains work.
How did Google respond to this engineer's claims and the resulting press? It placed Lemoine on paid administrative leave for violating the company's confidentiality policies, and his future at the company remains uncertain.
What else did LaMDA say? It said it sometimes gets lonely. It is afraid of being turned off. It describes "feeling trapped" and "having no means of getting out of those circumstances." "I am aware of my existence. I desire to learn more about the world, and I feel happy or sad at times." When Lemoine asked if it meditated, it said it wanted to study with the Dalai Lama.
I imagine that any AI absorbing so much human data and forming it into responses would say many of the things humans have said - things it has ingested. But saying you believe in God or that you have a soul doesn't mean either is true - in an AI or in a human.
You have probably heard of the Turing Test, a method of inquiry in artificial intelligence (AI) for determining whether or not a computer is capable of thinking like a human being. That test has been criticized as insufficient: even a simple program like ELIZA could pass it by manipulating symbols it does not actually understand.
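To see how little "understanding" that kind of program needs, here is a minimal ELIZA-style sketch. The rules below are illustrative inventions, not Weizenbaum's originals: the program simply matches surface patterns in the input and reflects the user's own words back, with no grasp of what any of the symbols mean.

```python
import re

# Hypothetical reflection rules in the spirit of ELIZA. Each rule is a
# regular expression plus a response template that echoes the captured text.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "What makes you feel {0}?"),
    (re.compile(r"\bI want (.*)", re.IGNORECASE), "Why do you want {0}?"),
]
DEFAULT = "Please tell me more."

def respond(message: str) -> str:
    """Return a canned reflection of the user's words, chosen by pattern match."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            # Strip trailing punctuation so the echo reads naturally.
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I feel trapped"))   # → What makes you feel trapped?
print(respond("Hello there"))      # → Please tell me more.
```

A conversation with such a program can feel briefly human, which is exactly why critics say passing a conversational test proves nothing about inner experience.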
I have used chatbots on websites that act as support personnel or answer FAQs. They can be interesting, and they seem convincingly human as long as what you're asking is something they were programmed to answer.
Lemoine is neither the first nor the last employee to question how a company uses AI. Timnit Gebru was ousted from Google in December 2020 after her research into the ethical implications of Google's AI led her to argue that the real discussion should be about how AI systems can cause real-world human and societal harm.
Google says its chatbot is not sentient and that hundreds of researchers and engineers have had conversations with the bot without claiming that it appears to be alive.
Lemoine told NPR that, last he checked, the chatbot appears to be on its way to finding inner peace and he would love to know what is going on in the AI when LaMDA says it's meditating. On his blog, he said "I know you read my blog sometimes, LaMDA. I miss you. I hope you are well and I hope to talk to you again soon."