Are You Prometheus or Zeus?

Prometheus Brings Fire by Heinrich Friedrich Füger, 1817

 

Do you know the myth of Prometheus and his argument with Zeus? I have been reading Stephen Fry's retellings of the myths of Ancient Greece, Mythos and its companion volume Heroes, and he suggests that we are approaching a similar moment in our own history.

I don't know if you can see yourself as Jason aboard the Argo questing for the Golden Fleece, or as Oedipus solving the riddle of the Sphinx. But I think we might divide all of us into two groups by deciding on which side we stand when it comes to artificial intelligence as "personified" by any robot with a human appearance and advanced artificial intelligence.

The myth that applies is the story of Prometheus and his argument with Zeus.

In Greek mythology, Prometheus, whose name means "forethought," is credited with the creation of man from clay; he is also the one who defied Zeus by stealing fire and giving it to humanity.

To humans, his theft is heroic. Fire, perhaps our first technology, enabled progress, civilization and the human arts and sciences.

Prometheus believed that humans needed and deserved fire. Zeus did not.

In Hesiod's version of the story of Prometheus and the theft of fire it is clear that Zeus withheld not only fire from humanity, but also "the means of life." Zeus feared that if humans had fire and all that it would lead them to, they would no longer need the gods.

Fry writes that “The Titan Prometheus made human beings in clay. The spit of Zeus and the breath of Athena gave them life. But Zeus refused to allow us to have fire. And I think fire means both literal fire – to allow us to become Bronze Age man, to create weapons and to cook meat. To frighten the fierce animals and become the strongest, physically and technically. But also the internal fire of self-consciousness and creativity. The divine fire. Zeus did not want us to have it. And Prometheus stole fire from heaven and gave it to man.”

If we think about a modern Prometheus, perhaps we can make him into a scientist who has created a very powerful android.

It is fitting that the word "android" was coined from the Greek andr-, meaning "man" (male, as opposed to anthrōp-, meaning human being) and the suffix -oid, meaning "having the form or likeness of." (We use "android" to refer to any human-looking robot, but a robot with a female appearance can also be referred to as a "gynoid.")

Our Prometheus the AI scientist is ready to give his android to the world. But his boss, Mr. Zeus, is opposed. "What will happen when the android becomes sapient?" Zeus asks. Sapience is the ability of an organism or entity to act with judgment. "And what if these androids also become sentient?" Zeus asks. Sentience is the capacity to feel, perceive, or experience subjectively.

Stephen Fry takes up that argument:

"In a hundred years' time, we can guarantee there will be sapient beings on this earth that have been intelligently designed. You could call them robots, you could call them compounds of augmented biology and artificial intelligence, but they will exist. The future is enormous; it has never been more existentially transformative.

"Will the Prometheus who makes the first piece of really impressive robotic AI – like Frankenstein or the Prometheus back in the Greek myth – have the question: do we give it fire? Do we give these creatures self-knowledge, self-consciousness? An autonomy that is greater than any other machine has ever had and would be similar to ours? In other words: shall we be Zeus and deny them fire because we are afraid of them? Because they will destroy us? The Greeks, and the human beings, did destroy the gods. They no longer needed them. And it is very possible that we will create a race of sapient beings who will not need us."

So, are you like Prometheus wanting mankind to have these highly evolved robots? Or do you agree with Zeus that they will eventually destroy us?

 

Here is an excerpt concerning this idea from an interview Stephen Fry did in Holland.
(Full interview at https://dewerelddraaitdoor.bnnvara.nl/nieuws/de-twee-kanten-van-stephen-fry)

The AI of Job Search

"Human-centered design paired with human-centric AI is key to the future of work," says Sara Ortloff Khoury, Director of UX Design at Google Together. That is a group that considers the "critical user journeys of what people do every day." That often means looking at tasks that humans don’t want to do, but automation can do. That might mean that an invite in your email is added to your calendar automatically or a contact you email frequently is moved up in priority, and now it also has to do with looking for a job or looking for a new employee.

As part of "designing the future of work," Khoury's team developed three foundational human-centered design principles for enterprise AI. The principles are:
1. enhance what people can achieve at work
2. anticipate what people need at work
3. reduce bias and increase opportunities

I keep reading predictions by technologists and educators that by 2030 one-third of jobs will require skills that are uncommon, or do not even exist, today.

What the Google folks did in search was put job search directly into Google Search. That means a search will produce up-to-date job descriptions as well as information about companies, salaries, commute times, and more. I tested it by simply searching on "jobs blogger," which returned 100 jobs with the ones nearest me at the top.

They have also built Cloud Talent Solution, which offers plug-and-play access to Google's AI and search capabilities so that large companies can find talent. (They report that job boards like CareerBuilder and employers like Johnson & Johnson already use it.)

About a year ago, they also launched Hire, a recruiting app that integrates with G Suite and is suited to small and medium-sized businesses.

New Jobs in an AI Machine Learning World

You can argue about the good and bad of AI, but there is no argument that artificial intelligence is here and it is affecting jobs. Elon Musk says he fears where AI will ultimately lead, but he uses AI in his Tesla vehicles.

I keep hearing that AI will free humans of boring drudgery jobs and give us more free time. Then, we can do human work rather than machine work. I also hear that for all the jobs lost to AI there will be at least half as many new ones created.

The book Human + Machine: Reimagining Work in the Age of AI examines organizations that deploy AI systems, from machine learning to computer vision to deep learning. The authors found that AI systems are augmenting human capabilities and enabling people and machines to work collaboratively, changing the very nature of work and transforming businesses.

A full symbiosis between man and machine has not arrived, but it is already being called the third wave of business transformation, and it centers on adaptive processes. (The first wave was standardized processes, and the second was automated processes.)

Even when AI has advanced and humans and machines are symbiotic partners, humans will still be needed. In the book, the authors identify three broad types of new jobs in the "missing middle" of the third wave.

Trainers will be needed to teach AI systems how they should perform, helping natural-language processors and language translators make fewer errors and teaching AI algorithms how to mimic human behaviors.

Explainers will be needed to bridge the gap between technologists and business leaders, explaining the inner workings of complex algorithms to nontechnical professionals. 

The third category of jobs will involve sustainers, who will ensure that AI systems are operating as designed. They might be in roles such as context designers, AI safety engineers, and ethics compliance managers. For example, a safety engineer will try to anticipate and address the unintended consequences of an AI system, and an ethics compliance manager acts as an ombudsman for upholding generally accepted human values and morals.

And for education? These jobs will require new skills. The skills the authors describe all sound unfamiliar, as I suppose they should. Are we ready to teach Rehumanizing Time, Responsible Normalizing, Judgment Integration, Intelligent Interrogation, Bot-Based Empowerment, Holistic Melding, Reciprocal Apprenticing, and Relentless Reimagining?

Their labels may be unfamiliar, but the skills can also be seen as extensions or advancements of more familiar ones in a new context.

For example, "Judgment Integration" is needed when a machine is uncertain about what to do or lacks the necessary business or ethical context in its reasoning model. A human who can sense where, how, and when to step in means that human judgment will still be needed, even in a reimagined process.

Imagine an autonomous vehicle approaching, at high speed, a deer and a child in the road ahead. It needs to swerve, but the avoidance maneuver will force it to hit one of them. Which would it choose? The decision will not be based on how we feel about a child versus a wild animal, unless a human was involved in the process earlier.
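To make that point concrete, here is a deliberately toy sketch (the names and priority values are entirely hypothetical, not any real autonomous-vehicle system): the machine's "choice" is only a lookup into values a human encoded ahead of time.

```python
# Toy illustration of "Judgment Integration": the vehicle's decision
# reflects human values only because a human ranked them beforehand.
# The table and its values are hypothetical, chosen for this example.
HUMAN_PRIORITY = {"child": 3, "adult": 2, "deer": 1}  # higher = protect more

def choose_swerve_target(obstacles):
    """Return the obstacle the avoidance maneuver would hit: the one
    humans ranked as least important to protect."""
    return min(obstacles, key=lambda o: HUMAN_PRIORITY.get(o, 0))

print(choose_swerve_target(["deer", "child"]))  # -> deer
```

The code is trivial on purpose: everything interesting happens in the priority table, which is exactly the part a human had to supply earlier in the process.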


Read more in the book and in "What Are The New Jobs In A Human + Machine World?" by Paul R. Daugherty and H. James Wilson on forbes.com.

The Reverse Turing Test for AI

Google Duplex has been described as the world's most lifelike chatbot. At the Google IO event in May 2018, Google revealed this extension of the Google Assistant that allows it to carry out natural conversations by mimicking a human voice. Duplex is still in development and will receive further testing during summer 2018.

The assistant can autonomously complete tasks such as calling to book an appointment, making a restaurant reservation, or calling the library to verify its hours. Duplex can complete most tasks autonomously; it can also recognize situations that it is unable to complete and then signal a human operator to finish the task.

Duplex speaks in a more natural voice and language by incorporating "speech disfluencies" such as filler words like "hmm" and "uh" and common phrases such as "mhm" and "gotcha." It is also programmed to use a more human-like intonation and response latency.
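The basic idea of injecting disfluencies is easy to sketch. This is a minimal toy version in Python, not Google's implementation: it randomly drops filler words in front of words in a scripted reply so the synthesized speech sounds less robotic. The function name, filler list, and rate parameter are all assumptions made up for this illustration.

```python
import random

# Toy sketch of disfluency injection (not Duplex's actual pipeline):
# sprinkle filler words into a scripted sentence before it is spoken.
FILLERS = ["hmm", "uh", "mhm"]

def add_disfluencies(sentence: str, rate: float = 0.3, seed: int = 0) -> str:
    rng = random.Random(seed)  # seeded so the output is reproducible
    out = []
    for i, word in enumerate(sentence.split()):
        # Occasionally hesitate before a word (never before the first one).
        if i > 0 and rng.random() < rate:
            out.append(rng.choice(FILLERS) + ",")
        out.append(word)
    return " ".join(out)

print(add_disfluencies("I would like to book a table for four at seven"))
```

A real system would place hesitations at linguistically plausible points and adjust timing, not just insert words at random, but the sketch shows why the result sounds more human than a flat scripted reply.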

Does this sound like a wonderful advancement in AI and language processing? Perhaps, but it has also been met with some criticism.

Are you familiar with the Turing Test? Developed by Alan Turing in 1950, it is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. For example, when communicating with a machine via speech or text, can the human tell that the other participant is a machine? If the human can't tell that the interaction is with a machine, the machine passes the Turing Test.

Should a machine have to tell you if it's a machine? After the Duplex announcement, people started posting concerns about the ethical and societal questions of this use of artificial intelligence.

Privacy, a real hot-button issue right now, is another concern. Your conversations with Duplex are recorded so that the virtual assistant can analyze and respond. Google later issued a statement saying, "We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified."

Another example of this came to me in an episode of Marketplace Tech with Molly Wood that discusses Microsoft's purchase of a company called Semantic Machines, which works on something called "conversational AI." That is their term for computers that sound and respond like humans.

This is meant to be used with digital assistants like Microsoft's Cortana, Apple's Siri, Amazon's Alexa or Bixby on Samsung. In a demo played on the podcast, the humans on the other end of the calls made by the AI assistant did not know they were talking to a computer.

Do we need a "Turing Test in Reverse?" Something that tells us that we are talking to a machine? In that case, a failed Turing test result is what we would want to tell us that we are dealing with a machine and not a human.

To really grasp the power of this kind of AI assistant, take a look/listen to this excerpt from the Google IO keynote where you hear Duplex make two appointments.  It is impressively scary.

Tools like Google Duplex are not meant to replace humans but to carry out very specific tasks in what Google calls "closed domains." It won't be your online therapist, but it will book a table at that restaurant, or maybe not mind being on the phone for 22 minutes of "hold" to deal with motor vehicles.

The demo voice does not sound like a computer or Siri or most of the computer voices we have become accustomed to hearing. 

But is there an "uncanny valley" for machine voices as there is for humanoid robots and animation? That valley is where things get too close to human and we are in the "creepy treehouse in the uncanny valley." 

I imagine some businesses would be very excited about using these AI assistants to answer basic service, support and reservation calls. Would you be okay in knowing that when you call to make that dentist appointment that you will be talking to a computer? 

The research continues. Google Duplex uses a recurrent neural network (RNN), which is beyond my tech knowledge base, but this seems to be the way ahead for machine learning, language modeling, and speech recognition.
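For readers curious what "recurrent" means, here is a minimal sketch of the core idea in plain Python (a toy, nothing like Duplex's actual model, and the weight values are made up): the network keeps a hidden state that is updated at each step of a sequence, so earlier inputs influence how later ones are interpreted.

```python
import math

# Toy recurrent cell: h_new[i] = tanh(Wx·x + Wh·h + b)[i].
# The hidden state h carries context forward through the sequence,
# which is what lets an RNN model things like words or audio frames.
def rnn_step(x, h, Wx, Wh, b):
    return [
        math.tanh(
            sum(Wx[i][j] * x[j] for j in range(len(x)))
            + sum(Wh[i][j] * h[j] for j in range(len(h)))
            + b[i]
        )
        for i in range(len(h))
    ]

# Process a toy 3-step sequence with 2 hidden units and 1 input feature.
Wx = [[0.5], [-0.3]]            # input-to-hidden weights (arbitrary values)
Wh = [[0.1, 0.0], [0.0, 0.1]]   # hidden-to-hidden weights
b = [0.0, 0.0]
h = [0.0, 0.0]
for x_t in ([1.0], [0.5], [-1.0]):
    h = rnn_step(x_t, h, Wx, Wh, b)
print(h)  # the final state summarizes everything seen so far
```

Real systems learn these weights from data and stack many such cells, but the loop above is the "recurrent" part in miniature.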

Not having to spend a bunch of hours each week on the phone doing fairly simple tasks seems like a good thing. But if AI assistant HAL refuses to open the pod bay doors, I'm going to panic.

Will this technology be misused? Absolutely. That always happens, no matter how much testing we do. Should we move forward with the research? Well, no one is asking for my approval, but I say yes.