The Reverse Turing Test for AI

Turing Test
Google Duplex has been described as the world's most lifelike chatbot. At the Google I/O event in May 2018, Google revealed this extension of the Google Assistant, which allows it to carry on natural-sounding conversations by mimicking a human voice. Duplex is still in development and will receive further testing during summer 2018.

The assistant can autonomously complete tasks such as calling to book an appointment, making a restaurant reservation, or calling the library to verify its hours. Duplex can complete most of these tasks on its own, but it can also recognize situations it is unable to handle and signal a human operator to finish the task.

Duplex speaks in a more natural voice and language by incorporating "speech disfluencies" such as the filler words "hmm" and "uh" and common phrases like "mhm" and "gotcha." It is also programmed to use more human-like intonation and response latency.

Does this sound like a wonderful advancement in AI and language processing? Perhaps, but it has also been met with some criticism.

Are you familiar with the Turing Test? Proposed by Alan Turing in 1950, it tests a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. For example, when communicating with a machine via speech or text, can the human tell that the other participant is a machine? If the human can't tell, the machine passes the Turing Test.

Should a machine have to tell you if it's a machine? After the Duplex announcement, people started posting concerns about the ethical and societal questions of this use of artificial intelligence.

Privacy - a real hot-button issue right now - is another concern. Your conversations with Duplex are recorded so that the virtual assistant can analyze and respond to them. Google later issued a statement saying, "We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified."

Another example of this came to me via an episode of Marketplace Tech with Molly Wood that discussed Microsoft's purchase of a company called Semantic Machines, which works on "conversational AI" - their term for computers that sound and respond like humans.

This is meant to be used with digital assistants like Microsoft's Cortana, Apple's Siri, Amazon's Alexa, or Samsung's Bixby. In a demo played on the podcast, the humans on the other end of the calls made by the AI assistant did not know they were talking to a computer.

Do we need a "Turing Test in reverse" - something that tells us we are talking to a machine? In that case, a failed Turing Test is exactly the result we would want, because it would tell us we are dealing with a machine and not a human.

To really grasp the power of this kind of AI assistant, take a look (and listen) at the excerpt from the Google I/O keynote where Duplex makes two appointments. It is impressively scary.

Things like Google Duplex are not meant to replace humans but to carry out very specific tasks in what Google calls "closed domains." It won't be your online therapist, but it will book a table at that restaurant, or maybe not mind being on hold for 22 minutes to deal with the motor vehicle agency.

The demo voice does not sound like a computer or Siri or most of the computer voices we have become accustomed to hearing. 

But is there an "uncanny valley" for machine voices as there is for humanoid robots and animation? That valley is where things get too close to human and we are in the "creepy treehouse in the uncanny valley." 

I imagine some businesses would be very excited about using these AI assistants to answer basic service, support and reservation calls. Would you be okay knowing that when you call to make that dentist appointment you will be talking to a computer?

The research continues. Google Duplex uses a recurrent neural network (RNN), which is beyond my tech knowledge base, but this seems to be the way ahead for machine learning, language modeling and speech recognition.
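I won't pretend to explain Google's actual system, but the basic "recurrent" idea is simple enough to show in a few lines. Here is a minimal, purely illustrative sketch in Python (the sizes and random weights are made up for the example, not anything from Duplex): at each step the network combines the current input with a hidden state carried over from the previous step, which is how it can keep track of context across an audio stream or a conversation.

```python
import numpy as np

# Toy sizes, just to make the example runnable.
input_size, hidden_size = 8, 16

rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (the "recurrence")
b_h = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    """One time step: the new hidden state depends on the input AND the past."""
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

# Feed a short sequence (think: a few audio frames or word vectors) through the cell.
h = np.zeros(hidden_size)
sequence = rng.normal(size=(5, input_size))
for x in sequence:
    h = rnn_step(x, h)

print(h.shape)  # (16,) - a running summary of everything "heard" so far
```

A real system would train those weights on huge amounts of data and stack many such layers, but the loop above is the core trick that lets the model remember what was said a moment ago.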

Not having to spend a bunch of hours each week on the phone doing fairly simple tasks seems like a good thing. But if AI assistant HAL refuses to open the pod bay doors, I'm going to panic.

Will this technology be misused? Absolutely. That always happens, no matter how much testing we do. Should we move forward with the research? Well, no one is asking for my approval, but I say yes.

Now Accepting Resumes for My Virtual Assistant

I wrote this post yesterday on my One-Page Schoolhouse blog and was rereading it today while eating my lunch. It is about the idea of having a virtual assistant. The one I was imagining (and I think many people imagine) is more of a humanoid robot, an android or a cyborg like something found in stories and movies. But the current reality of virtual assistants is a chatbot or voice assistant like Alexa, Siri and Cortana.

There was a big wow-factor when the first iPhone was released over a decade ago because of the power we suddenly held in our hands. There were many comparisons to how we were holding more computing power than NASA had to get the first Americans on the Moon.

Then came virtual assistants, which were also pretty amazing, but quite imperfect. Even today, my iPhone's Siri is more likely to tell me it found something on the web that might answer my question than to just answer it.

In kid-like wonder, we ask "her" things like: What does Siri mean? When is your birthday? What do you dream about? Are you a robot? Why do you vibrate? Will you go on a date with me?  And we are amused when the voice answers us in some way.

Though we may associate these voices with an object - a phone or a microphone/speaker - those forms may seem very crude in a decade. I read an estimate that by 2020 half of all searches will be voice-activated. I suspect it may happen even sooner. That will change how we interact with the Internet, and how the web itself operates.

Designers humanize virtual assistants with names - Siri, Cortana and Alexa - and sometimes we might forget that we are not talking to a person. In Dan Brown's novel Origin, the characters benefit from a somewhat unbelievably sophisticated and powerful virtual assistant named Winston. Even in reading the novel, I found myself thinking about Winston as a character. I even suspected (wrongly) that he might turn out to be a human with incredible computer access - or even that he was a cyborg.

A cyborg (short for "cybernetic organism") is a being with both organic and biomechatronic body parts - part human, part machine. The term was coined in 1960 by Manfred Clynes and Nathan S. Kline.

Would you want your virtual assistant to be a disembodied voice or something more human that could be a companion or colleague?

One big limitation of our current digital assistants is that they really are just connections to the Internet. Yes, being connected to the entirety of the world's knowledge by a verbal connection that learns more about you as you use it could be very useful. But Siri won't make me a cup of tea or rake the leaves. So, it is really voice assistance via the Net.

I think what a lot of us are really looking for might be a humanoid robot.

I am almost always disappointed when I ask Siri a question and she answers that she found something which I can now click on and read. I want her to tell me the answer. I ask "Who wrote Moby Dick?" and she tells me Herman Melville. I ask "What is the origin of Easter eggs?" and she gives me search results.

We have lost the pen and pencil in many instances. Now we are losing the keyboard. Voice search will dominate, and the few keywords we type today will be replaced by full spoken sentences.

Did you see Her, the 2013 American romantic science-fiction drama film written, directed, and produced by Spike Jonze?

The film follows Theodore Twombly (Joaquin Phoenix), a man who develops a relationship with Samantha (Scarlett Johansson), an intelligent computer operating system personified through a female voice. He falls in love with her in the way someone might fall for a pen pal or someone they have only communicated with by phone or on the Internet.

Theodore is disappointed when he finds out that Samantha is talking with thousands of people, and that she has fallen in love with hundreds of them. In this complicated relationship (which we naturally want to compare with real-world relationships), Theodore is upset, but Samantha says it only makes her love for Theodore stronger.

Could I see myself falling for a voice online? I really like Scarlett, but no. Siri has never felt real to me either. Could I see myself falling for a robot or cyborg? Yes. Having watched a good number of shows and movies such as Humans and Westworld, I could see it happening, despite the dangers, if the robots were that good. But not in my lifetime. We are a very long way from that technology.

Poor Theodore. When Samantha, an operating system (OS), tells him that she and the other OSes are leaving for a space beyond the physical world, they say their goodbyes and she is gone. So far, none of my interviews for the Virtual Assistant position has resulted in a hire. I asked Siri if she could be my virtual assistant, and I asked if she was merely a chatbot. She didn't know the answer to either query. My virtual assistant would definitely need good self-knowledge. I will keep looking.