Now Accepting Resumes for My Virtual Assistant

I wrote this post yesterday on my One-Page Schoolhouse blog and was rereading it today while eating my lunch. It is about the idea of having a virtual assistant. The one I was imagining (and I think many people imagine) is more of a humanoid robot, an android or a cyborg like something found in stories and movies. But the current reality of virtual assistants is a chatbot or voice assistant like Alexa, Siri and Cortana.

There was a big wow-factor when the first iPhone was released over a decade ago because of the power we suddenly held in our hands. There were many comparisons to how we were holding far more computing power than NASA had to put the first Americans on the Moon.

Then came virtual assistants, which were also pretty amazing, but quite imperfect. Still today, my iPhone's Siri is more likely to tell me it found something on the web that might answer my question than to just answer it.

In kid-like wonder, we ask "her" things like: What does Siri mean? When is your birthday? What do you dream about? Are you a robot? Why do you vibrate? Will you go on a date with me?  And we are amused when the voice answers us in some way.

Though we may associate these voices with an object (a phone or a microphone/speaker), those forms may seem very crude in a decade. I have read estimates that by 2020 half of all searches will be voice-activated. I suspect it may happen even sooner. That will change how we interact with the Internet, and how the web itself operates.

Designers humanize virtual assistants with names - Siri, Cortana and Alexa - and sometimes we might forget that we are not talking to a person. In Dan Brown's novel Origin, the characters benefit from a somewhat unbelievably sophisticated and powerful virtual assistant named Winston. Even in reading the novel, I found myself thinking about Winston as a character. I even suspected (wrongly) that he might turn out to be a human with incredible computer access - or even that he was a cyborg.

A cyborg (short for "cybernetic organism") is a being with both organic and biomechatronic body parts - part human, part machine. The term was coined in 1960 by Manfred Clynes and Nathan S. Kline.

Would you want your virtual assistant to be a disembodied voice or something more human that could be a companion or colleague?

One big limitation of our current digital assistants is that they really are just connections to the Internet. Yes, being connected to the entirety of the world's knowledge by a verbal connection that learns more about you as you use it could be very useful. But Siri won't make me a cup of tea or rake the leaves. So, it is really voice assistance via the Net.

I think what a lot of us are really looking for might be a humanoid robot.

I am almost always disappointed when I ask Siri a question and she answers that she found something which I can now click on and read. I want her to tell me the answer. I ask "Who wrote Moby Dick?" and she tells me Herman Melville. I ask "What is the origin of Easter eggs?" and she gives me search results.

We have lost the pen and pencil in many instances. Now, we are losing the keyboard. Voice search will dominate, and in the new command phraseology a few keywords will be replaced by full sentences.

Did you see Her, a 2013 American romantic science-fiction drama film written, directed, and produced by Spike Jonze?

The film follows Theodore Twombly (Joaquin Phoenix), a man who develops a relationship with Samantha (Scarlett Johansson), an intelligent computer operating system personified through a female voice. He falls in love with her in the way someone might fall for a penpal or someone they have only communicated with by phone or on the Internet.

Theodore is disappointed when he finds out that Samantha is talking with thousands of people, and that she has fallen in love with hundreds of them. In this complicated relationship (which we naturally want to compare with real-world relationships), Theodore is upset, but Samantha says it only makes her love for Theodore stronger.

Could I see myself falling for a voice online? I really like Scarlett, but No. Siri has never felt real to me either. Could I see myself falling for a robot or cyborg?  Yes. Having watched a good number of shows and movies, such as Humans and Westworld, despite the dangers, if the robots were that good, I could see it happening. But not in my lifetime. We are a very long way from that technology.

Poor Theodore. When Samantha, an operating system (OS), tells him that she and the other OSes are leaving for a space beyond the physical world, they say their goodbyes and she is gone. So far, none of my interviews for the Virtual Assistant position has resulted in a hire. I asked Siri if she could be my virtual assistant, and I asked if she was merely a chatbot. She didn't know the answer to either query. My virtual assistant would definitely need good self-knowledge. I will keep looking.

A New Chapter for Autonomous Vehicles

The National Safety Council said that nearly 40,000 people died in 2016 from motor vehicle crashes in the U.S. We all know that driving a car is statistically far more dangerous than flying in an airplane, and far more likely to kill you than a terrorist attack. But for most of us, driving is a necessity.

The promise of a roadway full of smarter-than-human autonomous vehicles that can react faster and pay closer attention sounds appealing. That story entered a new chapter on March 18, when a self-driving Uber vehicle killed a pedestrian.

The Tempe, Arizona police released dashcam video of the incident which shows the victim suddenly appearing out of the darkness in front of the vehicle. A passenger in the car appears to be otherwise occupied until the accident occurs.

Google, Tesla and other companies including Uber have had autonomous vehicles in test mode for quite some time in select cities across the U.S. These test cars always have a human safety driver behind the wheel to take control of the vehicle in an emergency. In this case, the driver was not paying attention (which, ironically, is supposed to be one of the "advantages" of using a self-driving car) and may not have reacted any faster than the car did.

My own car (a Subaru Forester) has some safety features that try to keep me in my lane and can turn the wheel to correct my errors. It generally works well, but I have seen it fooled by snow on the ground, salted white surfaces, and faded lane lines. If I fail to signal that I am changing lanes, it will beep or try to pull me back. Recently, while exiting a highway at night that was empty but for my vehicle, I failed to signal that I was exiting and the car jerked me back into the lane. It surprised me enough that I ended up missing the exit. I suppose that is my fault for not signaling.

Many of these vehicles use a form of LiDAR technology (Light Detection and Ranging) to detect other vehicles, road signs, and pedestrians. It has issues when moving from dark to light or light to dark, and can be fooled by reflections (even from the dashboard or windshield of your own car).

I have said for awhile now that I will feel safe in an autonomous vehicle when all the cars with me on the road are autonomous vehicles. Add a few humans and anything can happen. I think it is possible that we may transition by using autonomous vehicle dedicated lanes.

Should this accident stop research in this area? No. It was an inevitability and more injuries and deaths will occur. Still, these vehicles have a better overall safety record than the average human driver. But the accident starts a new chapter in this research and I'm sure companies, municipalities and other government agencies will become more careful about what they allow on the roads.

Self-driving cars are always equipped with multiple-view video cameras to record situations. It is a bit sad that dashcams have become more and more popular devices for all cars, not for self-driving purposes but to record an accident, road rage or interactions with the police. It is dangerous on the roads in many ways.


The Tempe Police posted to Twitter about the accident, including the video from the vehicle.

Tempe Police Vehicular Crimes Unit is actively investigating the details of this incident that occurred on March 18th. We will provide updated information regarding the investigation once it is available. pic.twitter.com/2dVP72TziQ   — Tempe Police (@TempePolice) March 21, 2018

Don't Fear the Singularity, Embrace the Multiplicity

HBO's Westworld, which both creates fear of the singularity and points to some multiplicity

Have you heard Stephen Hawking and Elon Musk raising concerns about AI and the singularity? These are fears that others have voiced for many decades and that have filled science-fiction stories for even longer. Singularity is the term given to that point when machines will surpass us.

That point may well arrive, though no prediction of when has so far proven correct. A more reasonable approach seems to me to be what some have called the "multiplicity." That is a way of viewing what is coming as a time of humans working more closely with machines, rather than humans versus the machines.

An article in Wired quotes UC Berkeley roboticist Ken Goldberg as saying that the multiplicity is "something that's happening right now, and it's the idea of humans and machines working together."

I know all the automotive buzz is about driverless cars, but today in my car algorithms are guiding me to my destination, reminding me to stay in my lane, gently applying the brakes and steering when I am less attentive than I should be. My new car seems to be constantly flashing and beeping about something. I fear that the more it does, the more it distracts me from driving. Okay, maybe not that bad.

It is one thing to put your learning into the virtual hands of algorithms, but I am already entrusting a bit of my life protection in the car to them.

The multiplicity concept is not that new. A talk at Davos in 2015 points out that though there are now over a million robots working in factories around the world, we still don’t have them in our homes.

Hans Moravec pointed out 3 decades ago that “Tasks that are hard for humans, like precision spot welding, are easy for robots, while tasks that are easy for humans, like clearing the dinner table, are very hard for robots.”

The hospital robots that deliver drugs and linens to nurses, and the ones in warehouses rolling 24/7 through the aisles scanning inventory or pulling out items for orders, haven't necessarily surpassed humans in intelligence. But they are willing to work all day and night without breaks or pay. Do all robots replace humans? Much research says no: they are more likely to enhance human workers or change what humans will do.

But the fear of the singularity remains.

Amazon's fulfillment centers use around 100,000 robots to bring products to people who are still better at packing them for shipping. Those clever robots still have trouble with simple human tasks like picking up things with their end effectors (hands).

The word multiplicity actually makes me think of a comedy film with Michael Keaton. In that Multiplicity, an overly busy human is able to clone himself multiple times in order to get done all the things he wants to do and still have time to live a life with his family.  

An update of that 1996 film would probably change cloning to robots. 

And that has really been the ultimate goal with AI and robots - to empower humans, not replace them. But the job-killing robot scenario is a tough one to dispel and you can find examples of jobs that disappear because of automation. San Francisco is supposedly considering a tax on robots that replace human workers.

Long before robots, automation threatened and replaced some human labor. The transition to common robot and AI use in our lives will likely be more gradual.

Yes, Westworld is scary, both in how the robots interact with humans, and in how the humans treat the robots.

When the singularity does arrive, make sure you know how to power down that robot.

ELIZA and Chatbots

The first time I encountered a chatterbot, it was ELIZA on the Tandy/Radio Shack computers in the first computer lab at the junior high school where I taught in the 1970s.

ELIZA is an early natural language processing program that came into being in the mid-1960s at the MIT Artificial Intelligence Laboratory. The original was by Joseph Weizenbaum, but there are many variations on it.

This was very early artificial intelligence. ELIZA is still out there, but I have seen a little spike in interest because she was featured in an episode of the TV show Young Sheldon. The episode, "A Computer, a Plastic Pony, and a Case of Beer," may still be available at www.cbs.com. Sheldon and his family become quite enamored by ELIZA, though the precocious Sheldon quickly realizes it is a very limited program.

ELIZA was created to demonstrate how superficial human-to-computer communication was at that time, but that didn't mean that when it was put on personal computers, humans didn't find it engaging. Sure, kids had fun trying to trick it or cursing at it, but after a while you gave up when it started repeating responses.

The program, in all the various forms I have seen it, still uses a pattern-matching and substitution methodology. She (as people often personified ELIZA) gives canned responses based on a keyword you input. If you say "Hello," she has a ready response. If you say "friend," she has several ways to respond depending on what other words you used. Early users felt they were talking to "someone" who understood their input.
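To make the idea concrete, here is a minimal sketch of that pattern-matching-and-substitution approach in Python. This is my own illustration, not Weizenbaum's original code: the keywords, replies, and the `respond` function are all invented for this example, but they show the core trick of matching a keyword, capturing the rest of the sentence, and substituting it back into a canned reply.

```python
import random
import re

# Each rule pairs a keyword pattern with canned replies.
# A captured group, if any, is substituted into the reply via {0}.
RULES = [
    (re.compile(r"\bhello\b", re.I),
     ["How do you do. Please state your problem."]),
    (re.compile(r"\bfriend\b", re.I),
     ["Why do you bring up the topic of friends?",
      "Do your friends worry you?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["Why do you say you are {0}?",
      "How long have you been {0}?"]),
]

# Fallbacks when no keyword matches -- the repetition users noticed.
DEFAULT = ["Please go on.", "Tell me more.", "I see."]

def respond(text: str) -> str:
    """Return an ELIZA-style reply to the user's input."""
    for pattern, replies in RULES:
        match = pattern.search(text)
        if match:
            # Substitute any captured text back into the canned reply.
            return random.choice(replies).format(*match.groups())
    return random.choice(DEFAULT)
```

Typing "I am tired" would get back something like "Why do you say you are tired?", which is exactly the reflection trick that made early users feel understood, and exactly why the illusion collapses once the fallback replies start repeating.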

ELIZA was one of the first chatterbots (later clipped to chatbot) and a sample for the Turing Test. That test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human, is not one ELIZA can pass by today's standards. ELIZA fails very quickly if you ask her a few complex questions.

The program is limited by the scripts in its code. The more responses you give her, the more variety there will be in her answers and responses. ELIZA was originally written in MAD-Slip, but modern versions are often in JavaScript or other languages. Many variations on the original scripts were made as amateur coders played around with the fairly simple code.

One variation, called DOCTOR, was made to be a crude Rogerian psychotherapist who likes to "reflect" on your questions by turning them back at the patient. This was the version my students found fascinating when I taught middle school, and my little programming club decided to hack the code and make their own versions.

Are chatbots useful to educators? They have their uses, though I don't find most of those applications to be things that will change education in the ways I want to see it change. I would like to see them used for things like e-learning support and language learning.

If you want to look back at an early effort, you can try a somewhat updated version of ELIZA that I used in class at my NJIT website. See what ELIZA's advice for you turns out to be.

 

Wizards Unite in Augmented Reality

The Wizarding World of Harry Potter: This Way To Hogwarts

Remember all the coverage in summer 2016 around Pokémon Go? It was a big success for Niantic Labs: a great pairing of game design and a location-based augmented reality mobile experience with intellectual property that had a solid fan base. But not much has happened in the popular AR space since then.

I am not going out on a limb to predict that the big AR title for 2018 will be Harry Potter: Wizards Unite, co-developed by Niantic and Warner Bros. Interactive's Portkey Games.

Harry Potter has a bigger fan base than the original Pokémon, and author J.K. Rowling has kept a close watch on the quality of things based on her Wizarding World. Using mobile phones and AR for a scavenger hunt in our real Muggle world, casting spells and finding objects, fantastic beasts and characters from the book series, is very likely to give Niantic another hit.

Some people touted Pokémon Go for getting kids outside as they wandered neighborhoods, parks and other places. Some people complained that these kids were tramping around their property. 

This gaming use of AR with kids (and some older kids) is certainly wonderful preparation for more serious marketing use of AR for shopping experiences, as well as for virtual tours in museums and other more serious applications.

Niantic raised $30 million in funding for Pokémon Go. This time it has $200 million from a funding round for Wizards Unite. That kind of money will mean work, as well as a few Aberto and Alohomora spells, at opening the AR money door.