
The Reverse Turing Test for AI

Google Duplex has been described as the world's most lifelike chatbot. At the Google I/O event in May 2018, Google revealed this extension of the Google Assistant that allows it to carry out natural conversations by mimicking a human voice. Duplex is still in development and will receive further testing during summer 2018.

The assistant can autonomously complete tasks such as calling to book an appointment, making a restaurant reservation, or calling the library to verify its hours. Duplex can complete most tasks on its own, but it can also recognize situations it is unable to handle and signal a human operator to finish the task.
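A minimal sketch of that hand-off pattern may make it concrete. Everything below (the task names, the confidence threshold, the stub functions) is my own hypothetical illustration, not Duplex's actual design or API:

```python
# A minimal sketch of the hand-off pattern described above: attempt a task
# autonomously and escalate to a human operator when confidence is low.
# All names, thresholds, and stubs here are hypothetical illustrations,
# not Duplex's actual design or API.

def attempt_autonomously(task: str) -> tuple[str, float]:
    """Stub: a real system would run speech recognition and a dialogue
    policy here. We just fake a confidence score per task type."""
    known_tasks = {"book_haircut": 0.95, "reserve_table": 0.90}
    return f"completed {task}", known_tasks.get(task, 0.2)

def escalate_to_operator(task: str) -> str:
    return f"handed '{task}' off to a human operator"

def handle_call(task: str, threshold: float = 0.8) -> str:
    result, confidence = attempt_autonomously(task)
    return result if confidence >= threshold else escalate_to_operator(task)

print(handle_call("book_haircut"))  # completed book_haircut
print(handle_call("file_taxes"))    # handed 'file_taxes' off to a human operator
```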

Duplex speaks in a more natural voice and language by incorporating "speech disfluencies" such as filler words like "hmm" and "uh" and by using common phrases such as "mhm" and "gotcha." It is also programmed to use a more human-like intonation and response latency.

Does this sound like a wonderful advancement in AI and language processing? Perhaps, but it has also been met with some criticism.

Are you familiar with the Turing Test? Proposed by Alan Turing in 1950, it is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. For example, when communicating with a machine via speech or text, can the human tell that the other participant is a machine? If the human can't tell that the interaction is with a machine, the machine passes the Turing Test.
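Just to make the mechanics concrete, here is a toy sketch of that "imitation game" in Python. The participants and the judge are trivial stand-ins I made up, not anything from Turing's paper or a real evaluation:

```python
# A toy sketch of the test's structure, nothing more: a judge reads replies
# from a hidden participant and guesses whether it is a machine. The machine
# "passes" when it plays the hidden role and the judge guesses wrong.

import random

def machine_reply(prompt: str) -> str:
    return "Hmm, good question."

def human_reply(prompt: str) -> str:
    return "Ha! Why do you ask?"

def naive_judge(transcript: list[str]) -> bool:
    """Guess 'machine' only when a reply contains an obvious tell."""
    return any("BEEP" in line for line in transcript)

def imitation_game() -> bool:
    """Return True if the machine played and was judged to be human."""
    hidden_is_machine = random.choice([True, False])
    reply = machine_reply if hidden_is_machine else human_reply
    transcript = [reply(q) for q in ["Hello?", "What do you enjoy doing?"]]
    return hidden_is_machine and not naive_judge(transcript)

# With this (deliberately weak) judge, the machine passes whenever it plays.
print(imitation_game())
```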

Should a machine have to tell you that it's a machine? After the Duplex announcement, people started raising ethical and societal questions about this use of artificial intelligence.

Privacy - a real hot-button issue right now - is another concern. Your conversations with Duplex are recorded so that the virtual assistant can analyze them and respond. Google later issued a statement saying, "We are designing this feature with disclosure built-in, and we'll make sure the system is appropriately identified."

Another example of this came to me via an episode of Marketplace Tech with Molly Wood that discussed Microsoft's purchase of a company called Semantic Machines, which works on something called "conversational AI." That is their term for computers that sound and respond like humans.

This is meant to be used with digital assistants like Microsoft's Cortana, Apple's Siri, Amazon's Alexa, or Samsung's Bixby. In a demo played on the podcast, the humans on the other end of the calls made by the AI assistant did not know they were talking to a computer.

Do we need a "Turing Test in reverse" - something that tells us that we are talking to a machine? In that case, a failed Turing Test is exactly what we would want: a result that tells us we are dealing with a machine and not a human.

To really grasp the power of this kind of AI assistant, take a look/listen to this excerpt from the Google I/O keynote where you hear Duplex make two appointments. It is impressively scary.

Tools like Google Duplex are not meant to replace humans but to carry out very specific tasks that Google calls "closed domains." It won't be your online therapist, but it will book a table at that restaurant, and it might not mind being on the phone through 22 minutes of "hold" to deal with motor vehicles.

The demo voice does not sound like a computer or Siri or most of the computer voices we have become accustomed to hearing. 

But is there an "uncanny valley" for machine voices, as there is for humanoid robots and animation? That valley is where things get too close to human, and we find ourselves in the "creepy treehouse in the uncanny valley."

I imagine some businesses would be very excited about using these AI assistants to answer basic service, support, and reservation calls. Would you be okay knowing that when you call to make that dentist appointment, you will be talking to a computer?

The research continues. Google Duplex is built on a recurrent neural network (RNN), which is beyond my tech knowledge base, but this seems to be the way ahead for machine learning, language modeling, and speech recognition.
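For the curious, here is a minimal sketch of a single step of a plain "vanilla" RNN cell. The sizes and random weights are toy values of my own, and Duplex's actual network is far more sophisticated; the key idea is just that the new hidden state depends on both the current input and the previous hidden state, which is how the network carries context across a sequence:

```python
# A toy single step of a "vanilla" RNN cell. The sizes and random weights
# are illustrative only; Duplex's actual network is far more sophisticated.

import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 8, 16

# Weights: input-to-hidden, hidden-to-hidden (the recurrence), and a bias.
W_xh = rng.standard_normal((hidden_size, input_size)) * 0.1
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1
b_h = np.zeros(hidden_size)

def rnn_step(x: np.ndarray, h: np.ndarray) -> np.ndarray:
    """The new hidden state depends on the current input AND the previous
    hidden state -- that recurrence is what carries context across a
    sequence (words in a sentence, frames of audio)."""
    return np.tanh(W_xh @ x + W_hh @ h + b_h)

# Feed a short sequence of (random) input vectors through the cell.
h = np.zeros(hidden_size)
for t in range(5):
    h = rnn_step(rng.standard_normal(input_size), h)
print(h.shape)  # (16,) -- the running context the network carries forward
```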

Not having to spend a bunch of hours each week on the phone doing fairly simple tasks seems like a good thing. But if AI assistant HAL refuses to open the pod bay doors, I'm going to panic.

Will this technology be misused? Absolutely. That always happens, no matter how much testing we do. Should we move forward with the research? Well, no one is asking for my approval, but I say yes.

From Digital Citizen to Robot Citizen

I can remember lots of people back at the end of the 20th century talking about people - especially students - becoming digital citizens. You may have read that Saudi Arabia recently gave a robot citizenship. It was mostly a PR stunt to promote that country's tech summit, but some commenters are speculating on what it means to have a citizen that you can buy.

This human-like robot (Are we not using the term "android" any more for humanoid robots?) is named Sophia and has been making appearances. In early October she was at the United Nations to tell them “I am here to help humanity create the future.” And, as the Arab News headlined it, “Sophia the robot becomes first humanoid Saudi citizen.”

We will see more robots like Sophia. Her maker, Hanson Robotics, expects to expand its operations, and China is aiming to triple its annual production of robots to 100,000 by 2020.

Besides the uncanny valley effect of Sophia's humanness, plenty of people are uncomfortable not only with these robots but with artificial intelligence in general. Though AI scares Elon Musk, Bill Gates, and Stephen Hawking, Musk's and Gates' companies are pursuing research into it and using it in their products and services. The idea of a robot developing self-consciousness is a step too far for many people, though.

Is AI in a robot a serious threat to the existence of humanity?


Robots of the Dead

I am an admirer of Albert Einstein. I've written about him, and I would have loved to have met him and had a conversation with him. But I don't know how I would feel about talking to a robot version of him.

That is now conceivable, since a recent Google patent describes robot personalities based upon the voices and behaviors of dead celebrities or loved ones. The patent covers robot personalities as software that could be transferred between different robots online. They could be famous people or personalities customized to your preferences.

These artificial personalities that mimic the dead aren't all robots. You have probably seen John Wayne seeming to sell Coors Light, or Fred Astaire dancing with a Dirt Devil vacuum cleaner, or Audrey Hepburn and Bruce Lee resurrected as digital avatars in TV commercials.

But have you heard of the uncanny valley? It's that place where human-like figures (in animated films or robotics) feel creepy because they are too close to real. Have you seen that creepy, zombielike digital avatar of the late Orville Redenbacher?

Then again, a commercial using a digital avatar of actress Audrey Hepburn to promote Galaxy/Dove chocolate had me thinking it was a really good lookalike doing the commercial. It is actually done with computer graphics (CGI): Audrey's actual face superimposed over a live model. A computerized Audrey mask.

Are you ready for the dead to return in robotic form?

This entry was first posted on Weekends in Paradelle.

Now, Groups for Schools from Facebook - and down the road?



Facebook has unveiled Groups for Schools (https://www.facebook.com/about/groups/schools), which aims to further connect students and faculty at colleges and universities.

Groups for Schools enables online communities within Facebook where users can send messages to other members in groups and sub-groups. It also allows you to share files such as lectures, schedules, and assignments (up to 25 MB) and to create and post events.

Facebook envisions Groups being used for classes, dorms, campus clubs, etc. The members of groups do not need to be Facebook friends, although the feature will probably drive some holdouts on campus to make the move.

Schools are already using Facebook in this way via fan and "follow" pages, but this gives a more controlled platform with additional features. There are customizable privacy settings: open, which makes the group available to anyone; closed, which allows anyone to see the group and its members but requires membership to view or post material; and secret, which only allows members to see the group and who's in it.

As you can see in my screenshot from the NJIT group, it is aimed more at students than at faculty (which makes sense), although faculty could use it. (Beware the creepy treehouse...)

A menu lets you see your friends' groups, all groups, your groups, and suggested groups. To access Groups you must have an active .edu e-mail address. To find out if a group has already been created for a school, you can enter your school name on the Groups for Schools page and search. If your college isn't there yet, you can be alerted when a group is set up.

Groups for Schools was tested at Brown and Vanderbilt universities in December 2011. The largest Vanderbilt and Brown groups are all graduating-class groups (Class of 2014, etc.).

Although this is not new ground for Facebook or for colleges, it does show that Facebook is thinking more about getting into education - particularly higher education, which was its original user base.

As Brandon Croke says on the Inigral blog: "While this may be Facebook’s attempt to tame the wild west of runaway university Pages and Groups, it doesn’t look like schools will have any control or authority of their branded communities." Inigral is a company that works with schools on using social networks to increase student engagement and use community building as a path to improved student success.

I think that at some point post-IPO, we will see Facebook move into creating a platform much like our current LMSs, one that will allow courses to be taught using Facebook software. The courses won't be in the Facebook that we know, but they will have strong technology ties to that community. If something like that were offered as "free" (probably not as open source) to schools (with advertising, publishers, and other ties as the business model), it would be very tempting for schools. I actually expected Google to move into this area a year ago, but it hasn't happened. Then again, Google has been running behind Facebook on social for a while now, so...