The Reverse Turing Test for AI

Google Duplex has been described as the world's most lifelike chatbot. At the Google I/O event in May 2018, Google revealed this extension of the Google Assistant that allows it to carry out natural conversations by mimicking a human voice. Duplex is still in development and will receive further testing during summer 2018.

The assistant can autonomously complete tasks such as calling to book an appointment, making a restaurant reservation, or calling the library to verify its hours. While Duplex can complete most tasks autonomously, it can also recognize situations it is unable to handle and signal a human operator to finish the task.

Duplex speaks in a more natural voice and language by incorporating "speech disfluencies" such as filler words like "hmm" and "uh" and common phrases like "mhm" and "gotcha." It is also programmed to use a more human-like intonation and response latency.

Does this sound like a wonderful advancement in AI and language processing? Perhaps, but it has also been met with some criticism.

Are you familiar with the Turing Test? Developed by Alan Turing in 1950, it is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. For example, when communicating with a machine via speech or text, can the human tell that the other participant is a machine? If the human can't tell that the interaction is with a machine, the machine passes the Turing Test.

Should a machine have to tell you if it's a machine? After the Duplex announcement, people started posting concerns about the ethical and societal questions of this use of artificial intelligence.

Privacy - a real hot button issue right now - is another concern. Your conversations with Duplex are recorded in order for the virtual assistant to analyze and respond. Google later issued a statement saying, "We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified."

Another example of this came to me on an episode of Marketplace Tech with Molly Wood that discusses Microsoft's purchase of a company called Semantic Machines, which works on something called "conversational AI." That is their term for computers that sound and respond like humans.

This is meant to be used with digital assistants like Microsoft's Cortana, Apple's Siri, Amazon's Alexa or Samsung's Bixby. In a demo played on the podcast, the humans on the other end of the calls made by the AI assistant did not know they were talking to a computer.

Do we need a "Turing Test in Reverse?" Something that tells us that we are talking to a machine? In that case, a failed Turing test result is what we would want to tell us that we are dealing with a machine and not a human.

To really grasp the power of this kind of AI assistant, take a look (and a listen) at the excerpt from the Google I/O keynote where you hear Duplex make two appointments. It is impressively scary.

Tools like Google Duplex are not meant to replace humans but to carry out very specific tasks in what Google calls "closed domains." It won't be your online therapist, but it will book a table at that restaurant, or it won't mind being on the phone for 22 minutes of "hold" to deal with motor vehicles.

The demo voice does not sound like a computer or Siri or most of the computer voices we have become accustomed to hearing. 

But is there an "uncanny valley" for machine voices as there is for humanoid robots and animation? That valley is where things get too close to human and we are in the "creepy treehouse in the uncanny valley." 

I imagine some businesses would be very excited about using these AI assistants to answer basic service, support and reservation calls. Would you be okay in knowing that when you call to make that dentist appointment that you will be talking to a computer? 

The research continues. Google Duplex uses a recurrent neural network (RNN) which is beyond my tech knowledge base, but this seems to be the way ahead for machine learning, language modeling and speech recognition.

Not having to spend a bunch of hours each week on the phone doing fairly simple tasks seems like a good thing. But if AI assistant HAL refuses to open the pod bay doors, I'm going to panic.

Will this technology be misused? Absolutely. That always happens, no matter how much testing we do. Should we move forward with the research? Well, no one is asking for my approval, but I say yes.

Is Your Website GDPR Ready?

Video: "What is the GDPR?" from Evidon on Vimeo

What is GDPR? GDPR is the General Data Protection Regulation, a European privacy law approved by the EU in 2016 that is designed to unify and regulate EU residents' control of their personal data. It replaces Directive 95/46/EC and becomes enforceable on May 25, 2018.

What does it mean for you if you are a website owner? Well, if you collect personal data via web forms, especially from people who live in the European Union, you'll need to make your website compliant with this regulation by May 25, 2018. It is also important to update your site's Privacy Policy to cover all personal information collected through the site.

What if you don't operate in the EU? Well, you may think you are outside the EU, but do you get visitors from the EU?  Aren't all websites "global" by default?

MORE INFORMATION

https://www.eugdpr.org/

https://en.wikipedia.org/wiki/General_Data_Protection_Regulation

https://www.codeinwp.com/blog/complete-wordpress-gdpr-guide/

Revealing Photos

You're probably tired of stories about privacy, Facebook and social media. But in the midst of all that the past few months, I continue to see lots of my online friends taking quizzes, liking posts and especially uploading photos.

Oh, what's the harm in posting a photo?

Your camera or phone adds a lot of data to a photo file. Especially with your phone's camera (on Flickr and many photo-sharing sites, the most popular "camera" is a phone), you are sharing your location, the date and time, the kind of device you used, its device ID and your mobile provider. The phone will also ping off any nearby Wi-Fi spots or cell towers, so your location is recorded even if you don't add it to the image post.
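To make "sharing your location" concrete: EXIF metadata stores GPS coordinates as degree/minute/second values plus a hemisphere letter. A minimal sketch (the function name is mine) of how a photo site could turn those tags into the decimal coordinates a mapping service uses:

```python
def exif_gps_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert an EXIF GPS tag (deg/min/sec plus 'N'/'S'/'E'/'W') to decimal degrees."""
    decimal = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative in decimal notation
    return -decimal if ref in ("S", "W") else decimal

# A latitude tag of 40 deg 44' 54.36" N pins a photo to roughly 40.7484
print(round(exif_gps_to_decimal(40, 44, 54.36, "N"), 4))
```

That single pair of tags, embedded silently in every geotagged shot, is enough to put a pin on a map.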

Add in facial recognition, which Facebook and Google apply to your photos, and those services will try to determine who is in the picture. If you tagged anyone, captioned the photo or added a specific location, you are feeding the database. Thanks, users!

Combine this data with knowing who your friends are, their data, and where you go with or without them, and it builds a very robust picture of you and your world.

Can't we control this? To a degree, yes, but not totally. Your phone and some cameras will automatically record that data for every shot. You can turn off location services/geotagging in some cases, but I'm not convinced the data isn't still there anyway. And if you are automatically backing up your photos to iCloud, Google or somewhere else in the cloud, I'm not certain that even your deleted photos, and their metadata, are ever really gone.
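For what it's worth, the metadata itself is removable: EXIF lives in a JPEG's APP1 marker segment, and stripping it amounts to copying every segment except that one. A rough stdlib-only sketch (the function name and this segment-walking approach are mine, not any particular tool's):

```python
import struct

def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Return a copy of a JPEG byte stream with its EXIF (APP1) segments removed."""
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG stream")
    out = bytearray(b"\xff\xd8")  # keep the Start Of Image marker
    i = 2
    while i < len(jpeg_bytes) - 1:
        if jpeg_bytes[i] != 0xFF:
            break  # malformed stream; stop rather than guess
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # Start Of Scan: compressed image data follows
            out += jpeg_bytes[i:]  # copy the rest of the file verbatim
            break
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2 : i + 4])
        segment = jpeg_bytes[i : i + 2 + length]
        # APP1 (0xE1) segments whose payload starts with "Exif" hold the metadata
        if not (marker == 0xE1 and segment[4:8] == b"Exif"):
            out += segment
        i += 2 + length
    return bytes(out)
```

Of course, this only helps before you upload; once a photo is on someone else's servers, the metadata is theirs.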

Am I overly paranoid? Can anyone be overly paranoid about privacy these days?


Was My Facebook Data Compromised?

Roughly 87 million people had their Facebook data harvested by the political consulting firm Cambridge Analytica.

On April 10 and 11 Mark Zuckerberg testified before Congress. The reviews were mixed. Some said he was robotic and evasive. I thought he did a good job in the face of some ignorant questions by people who clearly don't understand Facebook, social media or modern technology - and even mispronounced Zuckerberg's name several different ways.

The day before the hearings, Facebook finally notified the people who had their information grabbed by Cambridge Analytica: reportedly about 70 million Americans, plus users in the UK, Indonesia, and the Philippines.

I saw the notification at the top of my Facebook news feed when I logged in, along with a button for changing my privacy settings. Everyone, even those whose information wasn't captured and used by Cambridge Analytica, should check and tighten those settings.

How can you tell if your data was shared with Cambridge Analytica? Here is the link: https://www.facebook.com/help/1873665312923476 

What did Facebook tell me? 

"Based on our investigation, you don't appear to have logged into "This Is Your Digital Life" with Facebook before we removed it from our platform in 2015. However, a friend of yours did log in. As a result, the following information was likely shared with "This Is Your Digital Life": Your public profile, Page likes, birthday and current city. A small number of people who logged into "This Is Your Digital Life" also shared their own News Feed, timeline, posts and messages which may have included posts and messages from you. They may also have shared your hometown."

One of the questions Zuckerberg was asked concerned the fact that Cambridge Analytica wasn't the only company misusing Facebook data. Facebook suspended at least two more research firms before the hearings: CubeYou, which was also misusing data from personality quizzes, and AggregateIQ.

After a rash of people said they were quitting Facebook and the stock took a hit, the stock rebounded during the hearings, and I am seeing less talk about quitting. Though there are plenty of social networks, none has all the features of Facebook while holding a large user base. One Senator asked if Facebook is a monopoly. Zuckerberg said no, but was unable to give an example of a major competitor. Yes, Facebook overlaps with networks like Twitter and its own Instagram, but no one really does it all.

Zuckerberg made the point repeatedly that Facebook has already made many positive changes since the Cambridge Analytica breach and is still making them now, ahead of any possible regulation by Congress. Are all the issues corrected? No. Are things better with Facebook and privacy? Yes. Will it or some competitor ever be the perfect social network? No way.

A New Chapter for Autonomous Vehicles

The National Safety Council said that nearly 40,000 people died in motor vehicle crashes in the U.S. in 2016. We all know that driving a car is statistically far more dangerous than flying in an airplane, and that dying in a crash is more likely than being the victim of a terrorist attack. But for most of us, driving is a necessity.

The promise of a roadway full of smarter-than-human autonomous vehicles that can react faster and pay closer attention sounds appealing. That story entered a new chapter on March 18, when a self-driving Uber vehicle killed a pedestrian.

The Tempe, Arizona police released dashcam video of the incident, which shows the victim suddenly appearing out of the darkness in front of the vehicle. The safety driver in the car appears to be otherwise occupied until the moment of impact.

Google, Tesla and other companies, including Uber, have had autonomous vehicles in test mode for quite some time in select cities across the U.S. These test cars always have a human safety driver behind the wheel to take control in an emergency. In this case, the driver was not paying attention (not having to pay attention being one of the supposed "advantages" of a self-driving car) and may not have reacted any faster than the car did.

My own car (a Subaru Forester) has some safety features that try to keep me in my lane and can turn the wheel to correct my errors. They generally work well, but I have seen them fooled by snow on the ground, salted white surfaces and faded lane lines. If I fail to signal that I am changing lanes, the car will beep or try to pull me back. Recently, while exiting a highway at night that was empty but for my vehicle, I failed to signal that I was exiting and the car jerked me back into the lane. It surprised me enough that I ended up missing the exit. I suppose that is my fault for not signaling.

Many of these vehicles use a form of LiDAR (Light Detection and Ranging) to detect other vehicles, road signs, and pedestrians. It has trouble when moving from dark to light or light to dark and can be fooled by reflections (even from the dashboard or windshield of your own car).

I have said for a while now that I will feel safe in an autonomous vehicle only when all the cars on the road with me are autonomous too. Add a few humans and anything can happen. I think it is possible that we may transition by using lanes dedicated to autonomous vehicles.

Should this accident stop research in this area? No. It was an inevitability and more injuries and deaths will occur. Still, these vehicles have a better overall safety record than the average human driver. But the accident starts a new chapter in this research and I'm sure companies, municipalities and other government agencies will become more careful about what they allow on the roads.

Self-driving cars are always equipped with multiple-view video cameras to record situations. It is a bit sad that dashcams have become more and more popular for all cars, not for self-driving purposes but to record an accident, road rage or interactions with the police. The roads are dangerous in many ways.


The Tempe Police posted to Twitter about the accident, including the video from the vehicle.

Tempe Police Vehicular Crimes Unit is actively investigating the details of this incident that occurred on March 18th. We will provide updated information regarding the investigation once it is available. pic.twitter.com/2dVP72TziQ   — Tempe Police (@TempePolice) March 21, 2018