"Human-centered design paired with human-centric AI is key to the future of work," says Sara Ortloff Khoury, Director of UX Design at Google Together, a group that considers the "critical user journeys of what people do every day." That often means looking at tasks that humans don't want to do but automation can handle. An invite in your email might be added to your calendar automatically, or a contact you email frequently might be moved up in priority. Now it also extends to looking for a job or looking for a new employee.
As part of "designing the future of work," Khoury's team developed three foundational human-centered design principles for AI in the enterprise. The principles are:
1. enhance what people can achieve at work
2. anticipate what people need at work
3. reduce bias and increase opportunities
I keep reading predictions by technologists and educators along the lines that, by 2030, one-third of jobs will require skills that are uncommon or don't even exist today.
What the Google folks did in search was put job search directly on Google Search. That means a search will produce up-to-date job descriptions as well as information about companies, salaries, commute times, and more. I tested it by simply searching on "jobs blogger," and it returned 100 jobs with the ones nearest me at the top.
They have also built Cloud Talent Solution, which offers plug-and-play access to Google's AI and search capabilities for large companies to find talent. (They report that job boards like CareerBuilder and employers like Johnson & Johnson already use it.)
About a year ago they also launched Hire, a recruiting app that integrates with G Suite and is suited to small and medium-sized businesses.
You can argue about the good and bad of AI, but there is no argument that artificial intelligence is here and it is affecting jobs. Elon Musk says he fears where AI will ultimately lead, but he uses AI in his Tesla vehicles.
I keep hearing that AI will free humans of boring drudgery jobs and give us more free time. Then, we can do human work rather than machine work. I also hear that for all the jobs lost to AI there will be at least half as many new ones created.
The book Human + Machine: Reimagining Work in the Age of AI examines organizations that deploy AI systems, from machine learning to computer vision to deep learning. The authors found that AI systems are augmenting human capabilities and enabling people and machines to work collaboratively, changing the very nature of work and transforming businesses.
A symbiosis between man and machine is not here yet, but it is already being called the third wave of business transformation, one built on adaptive processes. (The first wave was standardized processes; the second was automated processes.)
Even when AI has advanced to the point where humans and machines are symbiotic partners, humans will still be needed. The authors identify three broad types of new jobs in the "missing middle" of the third wave.
Trainers will be needed to teach AI systems how they should perform, helping natural-language processors and language translators make fewer errors and teaching AI algorithms how to mimic human behaviors.
Explainers will be needed to bridge the gap between technologists and business leaders, explaining the inner workings of complex algorithms to nontechnical professionals.
The third category of jobs will involve sustainers who will ensure that AI systems are operating as designed. They might be in roles such as context designers, AI safety engineers, and ethics compliance managers. For example, a safety engineer will try to anticipate and address the unintended consequences of an AI system, and an ethics compliance manager acts as an ombudsman for upholding generally accepted human values and morals.
And for education? These jobs will require new skills. The skills the authors describe all sound unfamiliar, as I suppose they should. Are we ready to teach Rehumanizing Time, Responsible Normalizing, Judgment Integration, Intelligent Interrogation, Bot-Based Empowerment, Holistic Melding, Reciprocal Apprenticing, and Relentless Reimagining?
Their labels may be unfamiliar, but the skills can also be seen as extensions or advancements of more familiar ones in a new context.
For example, "Judgment Integration" is needed when a machine is uncertain about what to do or lacks the necessary business or ethical context in its reasoning model. Because a human must sense where, how, and when to step in, human judgment will still be needed even in a reimagined process.
Imagine an autonomous vehicle approaching, at high speed, a deer and a child in the road ahead. It needs to swerve, but swerving means hitting one of them. Which would it choose? The decision will not be based on how we feel about a child versus a wild animal - unless a human has been involved in the process earlier.
Read more in the book and in "What Are The New Jobs In A Human + Machine World?" by Paul R. Daugherty and H. James Wilson on forbes.com.
Google Duplex has been described as the world's most lifelike chatbot. At the Google IO event in May 2018, Google revealed this extension of the Google Assistant that allows it to carry out natural conversations by mimicking human voice. Duplex is still in development and will receive further testing during summer 2018.
The assistant can autonomously complete tasks such as calling to book an appointment, making a restaurant reservation, or calling the library to verify its hours. While Duplex can complete most tasks autonomously, it can also recognize situations it is unable to handle and then signal a human operator to finish the task.
Duplex speaks in a more natural voice and language by incorporating "speech disfluencies" such as filler words like "hmm" and "uh" and using common phrases such as "mhm" and "gotcha." It also is programmed to use a more human-like intonation and response latency.
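To get a feel for what "incorporating disfluencies" could mean in practice, here is a toy sketch. Everything in it (the function name, the filler lists, the probabilities) is invented for illustration; it is not Google's code, just the general idea of post-processing a reply so it sounds less robotic.

```python
import random

# Hypothetical sketch: "humanize" an assistant's reply by occasionally
# prepending a filler word and swapping formal acknowledgments for
# casual ones, in the spirit of the disfluencies Google describes.
FILLERS = ["hmm,", "uh,"]
ACKNOWLEDGMENTS = {"yes": "mhm", "okay": "gotcha"}

def humanize(reply, rng=random.Random(0)):
    """Swap in casual acknowledgments; sometimes hesitate first."""
    words = [ACKNOWLEDGMENTS.get(w.lower().strip(".,"), w)
             for w in reply.split()]
    text = " ".join(words)
    if rng.random() < 0.3:  # hesitate before answering ~30% of the time
        text = rng.choice(FILLERS) + " " + text
    return text

print(humanize("Okay a table for four at 7 works"))
# prints: gotcha a table for four at 7 works
```

The real system, of course, works at the level of synthesized audio (intonation, latency), not just word substitution; this only illustrates the surface idea.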
Does this sound like a wonderful advancement in AI and language processing? Perhaps, but it has also been met with some criticism.
Are you familiar with the Turing Test? Developed by Alan Turing in 1950, it is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. For example, when communicating with a machine via speech or text, can the human tell that the other participant is a machine? If the human can't tell that the interaction is with a machine, the machine passes the Turing Test.
Should a machine have to tell you if it's a machine? After the Duplex announcement, people started posting concerns about the ethical and societal questions of this use of artificial intelligence.
Privacy - a real hot button issue right now - is another concern. Your conversations with Duplex are recorded in order for the virtual assistant to analyze and respond. Google later issued a statement saying, "We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified."
Another example of this came to me on an episode of Marketplace Tech with Molly Wood that discusses Microsoft's purchase of a company called Semantic Machines which works on something called "conversational AI." That is their term for computers that sound and respond like humans.
This is meant to be used with digital assistants like Microsoft's Cortana, Apple's Siri, Amazon's Alexa, or Samsung's Bixby. In a demo played on the podcast, the humans on the other end of the calls made by the AI assistant did not know they were talking to a computer.
Do we need a "Turing Test in reverse" - something that tells us we are talking to a machine? In that case, a failed Turing Test is exactly what we would want: a result that tells us we are dealing with a machine and not a human.
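The simplest version of that "reverse Turing Test" is disclosure built into the assistant itself. A minimal sketch, with an invented class name and wording (nothing here comes from Google's actual implementation):

```python
# Toy sketch of built-in disclosure: instead of trying to pass as
# human, the assistant identifies itself at the start of every call.
class DisclosingAssistant:
    def __init__(self, task):
        self.task = task  # the "closed domain" task, e.g. a booking

    def greeting(self):
        # The disclosure comes first, before the task is stated.
        return ("Hi, this is an automated assistant calling on behalf "
                "of a customer to " + self.task + ". Am I speaking "
                "with the right person?")

bot = DisclosingAssistant("book a haircut appointment")
print(bot.greeting())
```

The design choice matters: if the disclosure is part of the greeting, the person on the other end never has to guess, which is precisely what Google later promised for Duplex.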
To really grasp the power of this kind of AI assistant, take a look/listen to this excerpt from the Google IO keynote where you hear Duplex make two appointments. It is impressively scary.
Tools like Google Duplex are not meant to replace humans but to carry out very specific tasks within what Google calls "closed domains." It won't be your online therapist, but it will book a table at that restaurant, or maybe not mind being on hold for 22 minutes to deal with motor vehicles.
The demo voice does not sound like a computer or Siri or most of the computer voices we have become accustomed to hearing.
But is there an "uncanny valley" for machine voices as there is for humanoid robots and animation? That valley is where things get too close to human and we are in the "creepy treehouse in the uncanny valley."
I imagine some businesses would be very excited about using these AI assistants to answer basic service, support and reservation calls. Would you be okay in knowing that when you call to make that dentist appointment that you will be talking to a computer?
The research continues. Google Duplex uses a recurrent neural network (RNN) which is beyond my tech knowledge base, but this seems to be the way ahead for machine learning, language modeling and speech recognition.
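For readers curious about what a recurrent network actually does, the core idea fits in a few lines: the hidden state at each step mixes the new input with the previous state, which is what gives the network "memory" of earlier inputs. Here is a deliberately tiny pure-Python sketch with made-up one-dimensional weights; real systems use large learned weight matrices.

```python
import math

# Toy RNN update: h_t = tanh(w_x * x_t + w_h * h_{t-1}).
# The weights here are arbitrary illustrative constants, not learned.
def rnn_step(x, h_prev, w_x=0.5, w_h=0.8):
    return math.tanh(w_x * x + w_h * h_prev)

h = 0.0
for x in [1.0, 0.0, 0.0]:   # input arrives only at the first step...
    h = rnn_step(x, h)
    print(round(h, 3))      # ...but its echo persists, slowly decaying
# prints 0.462, then 0.354, then 0.276
```

That decaying echo is the simplest form of the memory that lets such networks model sequences like speech and language.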
Not having to spend a bunch of hours each week on the phone doing fairly simple tasks seems like a good thing. But if AI assistant HAL refuses to open the pod bay doors, I'm going to panic.
Will this technology be misused? Absolutely. That always happens, no matter how much testing we do. Should we move forward with the research? Well, no one is asking for my approval, but I say yes.
I wrote this post yesterday on my One-Page Schoolhouse blog and was rereading it today while eating my lunch. It is about the idea of having a virtual assistant. The one I was imagining (and I think many people imagine) is more of a humanoid robot, an android or a cyborg like something found in stories and movies. But the current reality of virtual assistants is a chatbot or voice assistant like Alexa, Siri and Cortana.
There was a big wow factor when the first iPhone was released over a decade ago because of the power we suddenly held in our hands. There were many comparisons to how we were holding more computing power than NASA had to get the first Americans to the Moon.
Then came virtual assistants which were also pretty amazing, but quite imperfect. Still today, my iPhone's Siri voice is more likely to tell me it found something on the web that might answer my question rather than just answering it.
In kid-like wonder, we ask "her" things like: What does Siri mean? When is your birthday? What do you dream about? Are you a robot? Why do you vibrate? Will you go on a date with me? And we are amused when the voice answers us in some way.
Though we may associate these voices with an object - a phone or microphone/speaker - those forms may seem very crude in a decade. I have read estimates that by 2020 half of all searches will be voice searches. I suspect it may happen even sooner. That will change how we interact with the Internet, and how the web itself operates.
Designers humanize virtual assistants with names - Siri, Cortana and Alexa - and sometimes we might forget that we are not talking to a person. In Dan Brown's novel Origin, the characters benefit from a somewhat unbelievably sophisticated and powerful virtual assistant named Winston. Even in reading the novel, I found myself thinking about Winston as a character. I even suspected (wrongly) that he might turn out to be a human with incredible computer access - or even that he was a cyborg.
A cyborg (short for "cybernetic organism") is a being with both organic and biomechatronic body parts - part human, part machine. The term was coined in 1960 by Manfred Clynes and Nathan S. Kline.
Would you want your virtual assistant to be a disembodied voice or something more human that could be a companion or colleague?
One big limitation of our current digital assistants is that they really are just connections to the Internet. Yes, being connected to the entirety of the world's knowledge by a verbal connection that learns more about you as you use it could be very useful. But Siri won't make me a cup of tea or rake the leaves. So, it is really voice assistance via the Net.
I think what a lot of us are really looking for might be a humanoid robot.
I am often disappointed when I ask Siri a question and she answers that she found something which I can now click on and read. I want her to tell me the answer. I ask "Who wrote Moby Dick?" and she tells me Herman Melville. But when I ask "What is the origin of Easter eggs?" she gives me search results.
We have lost the pen and pencil in many instances. Now we are losing the keyboard. Voice search will dominate, and full spoken sentences will replace a few typed keywords.
Did you see Her, a 2013 American romantic science-fiction drama film written, directed, and produced by Spike Jonze?
The film follows Theodore Twombly (Joaquin Phoenix), a man who develops a relationship with Samantha (Scarlett Johansson), an intelligent computer operating system personified through a female voice. He falls in love with her in the way someone might fall for a penpal or someone they have only communicated with by phone or on the Internet.
Theodore is disappointed when he finds out that Samantha is talking with thousands of other people, and that she has fallen in love with hundreds of them. In this complicated relationship (which we naturally want to compare with real-world relationships), Theodore is upset, but Samantha says it only makes her love for Theodore stronger.
Could I see myself falling for a voice online? I really like Scarlett, but no. Siri has never felt real to me either. Could I see myself falling for a robot or cyborg? Yes. Having watched a good number of shows and movies, such as Humans and Westworld, I could see it happening, despite the dangers, if the robots were that good. But not in my lifetime. We are a very long way from that technology.
Poor Theodore. When Samantha, an operating system (OS), tells him that she and other OSes are leaving for a space beyond the physical world, they say their goodbyes and she is gone. So far, none of my interviews for the Virtual Assistant position has resulted in a hire. I asked Siri if she could be my virtual assistant, and I asked if she was merely a chatbot. She didn't know the answer to either query. My virtual assistant would definitely need good self-knowledge. I will keep looking.
The National Safety Council said that nearly 40,000 people died in 2016 from motor vehicle crashes in the U.S. We all know that driving a car is statistically far more dangerous than flying in an airplane, and that you are more likely to die in a crash than in a terrorist attack. But for most of us, driving is a necessity.
The promise of a roadway full of smarter-than-humans autonomous vehicles that can react faster and pay closer attention sounds appealing. That story entered a new chapter when on March 18 a self-driving Uber vehicle killed a pedestrian.
The Tempe, Arizona police released dashcam video of the incident which shows the victim suddenly appearing out of the darkness in front of the vehicle. A passenger in the car appears to be otherwise occupied until the accident occurs.
Google, Tesla, and other companies including Uber have had autonomous vehicles in test mode for quite some time in select cities across the U.S. These test cars always have a human safety driver behind the wheel to take control in an emergency. In this case, he was not paying attention - not having to pay attention being one of the supposed "advantages" of a self-driving car - and may not have reacted any faster than the car did.
My own car (a Subaru Forester) has some safety features that try to keep me in my lane and can turn the wheel to correct my errors. It generally works well, but I have seen it fooled by snow on the ground, salted white surfaces, and faded lane lines. If I fail to signal that I am changing lanes, it will beep or try to pull me back. Recently, while exiting a highway at night that was empty but for my vehicle, I failed to signal that I was exiting and the car jerked me back into the lane. It surprised me enough that I ended up missing the exit. I suppose that is my fault for not signaling.
Many of these vehicles use a form of LiDAR (Light Detection and Ranging) to detect other vehicles, road signs, and pedestrians. It has trouble when moving from dark to light or light to dark and can be fooled by reflections (even from the dashboard or windshield of your own car).
I have said for a while now that I will feel safe in an autonomous vehicle when all the cars on the road with me are autonomous vehicles. Add a few humans and anything can happen. I think it is possible that we may transition by using dedicated lanes for autonomous vehicles.
Should this accident stop research in this area? No. It was inevitable, and more injuries and deaths will occur. Still, these vehicles have a better overall safety record than the average human driver. But the accident starts a new chapter in this research, and I'm sure companies, municipalities, and other government agencies will become more careful about what they allow on the roads.
Self-driving cars are always equipped with multiple-view video cameras to record situations. It is a bit sad that dashcams have become more and more popular devices for all cars, not for self-driving purposes but to record an accident, road rage or interactions with the police. It is dangerous on the roads in many ways.
The Tempe Police posted to Twitter about the accident, including the video from the vehicle.
Tempe Police Vehicular Crimes Unit is actively investigating the details of this incident that occurred on March 18th. We will provide updated information regarding the investigation once it is available. pic.twitter.com/2dVP72TziQ — Tempe Police (@TempePolice) March 21, 2018