Something Scientists Rarely Talk About


Scientists talk about science. They don't often talk about religion. Even famous scientists in history - Newton, Darwin, Einstein - were careful about what they said on the subject of religion. When Albert Einstein said that "God does not play dice with the universe," that god was not necessarily the God that people speak of in religious terms.

Two things I recently encountered brought this to mind. One is Things a Computer Scientist Rarely Talks About, a 2001 book of the annotated transcripts of six public lectures given by Donald E. Knuth at MIT. (read an excerpt) Knuth is an American computer scientist, mathematician, and professor emeritus at Stanford University, probably best known as the author of the multi-volume The Art of Computer Programming. The lectures move between religion and science (particularly computer science), and Knuth gives credence to the concept of divinity.

The second thing I stumbled on recently came in an article about Anthony Levandowski in Wired magazine. It portrays him as an unlikely prophet bridging artificial intelligence and religion. He was/is known as an engineer working on self-driving cars. But his newest "startup" is the launch of a new religion of artificial intelligence. It is called Way of the Future.

Way of the Future (WOTF) is about creating a peaceful transition in who is in charge of the planet, as we move from people being in charge to people and machines being in charge together. And perhaps even a future when machines are in charge of the humans?

That future of the singularity seems closer than we might imagine, given that technology has already surpassed human abilities in some instances. Of course, beating humans at chess and Go, making faster calculations and predictions, or being better at sorting items in a warehouse isn't the same thing as "running the world."

WOTF wants the future transition to be smoother and believes progress shouldn't be feared or prevented. That means machines need to have "rights" too.

Does Levandowski really intend WOTF to be a "religion?" Is he willing to abandon the robotics and legal battles between Uber and Waymo for autonomous-vehicle dominance? It turns out that in papers filed with the Internal Revenue Service last year, Levandowski is listed as the “Dean” of the new religion, as well as the CEO of the nonprofit corporation formed to run it.

Those documents certainly make it sound like a new religion. The listed activities will focus on “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.”

The new religion will target AI professionals and “laypersons who are interested in the worship of a Godhead based on AI.” The church - and they do call it a church in their filings, probably for tax reasons - has been doing workshops and educational programs in the San Francisco Bay Area.

A September 2017 article in Wired is titled "God Is a Bot and Anthony Levandowski Is His Messenger." We will see about that.

Now Accepting Resumes for My Virtual Assistant

I wrote this post yesterday on my One-Page Schoolhouse blog and was rereading it today while eating my lunch. It is about the idea of having a virtual assistant. The one I was imagining (and I think many people imagine) is more of a humanoid robot, an android or a cyborg like something found in stories and movies. But the current reality of virtual assistants is a chatbot or voice assistant like Alexa, Siri and Cortana.

There was a big wow factor when the first iPhone was released over a decade ago because of the power we suddenly had in our hands. There were many comparisons to how we were holding more computing power than NASA had to get the first Americans to the Moon.

Then came virtual assistants which were also pretty amazing, but quite imperfect. Still today, my iPhone's Siri voice is more likely to tell me it found something on the web that might answer my question rather than just answering it.

In kid-like wonder, we ask "her" things like: What does Siri mean? When is your birthday? What do you dream about? Are you a robot? Why do you vibrate? Will you go on a date with me?  And we are amused when the voice answers us in some way.

Though we may associate these voices with an object - a phone or microphone/speaker - those forms may seem very crude in a decade. I read that it is estimated that by 2020 half of all searches will be voice activated. I suspect it may come even sooner. That will change how we interact with the Internet, and how the web itself operates.

Designers humanize virtual assistants with names - Siri, Cortana and Alexa - and sometimes we might forget that we are not talking to a person. In Dan Brown's novel Origin, the characters benefit from a somewhat unbelievably sophisticated and powerful virtual assistant named Winston. Even in reading the novel, I found myself thinking about Winston as a character. I even suspected (wrongly) that he might turn out to be a human with incredible computer access - or even that he was a cyborg.

A cyborg (short for "cybernetic organism") is a being with both organic and biomechatronic body parts - part human, part machine. The term was coined in 1960 by Manfred Clynes and Nathan S. Kline.

Would you want your virtual assistant to be a disembodied voice or something more human that could be a companion or colleague?

One big limitation of our current digital assistants is that they really are just connections to the Internet. Yes, being connected to the entirety of the world's knowledge by a verbal connection that learns more about you as you use it could be very useful. But Siri won't make me a cup of tea or rake the leaves. So, it is really voice assistance via the Net.

I think what a lot of us are really looking for might be a humanoid robot.

I am almost always disappointed when I ask Siri a question and she answers that she found something which I can now click on and read. I want her to tell me the answer. I ask "Who wrote Moby Dick?" and she tells me Herman Melville. I ask "What is the origin of Easter eggs?" and she gives me search results.

We have lost the pen and pencil in many instances. Now, we are losing the keyboard. Voice search will dominate, and in the new command phraseology a few keywords will be replaced by full sentences.

Did you see Her, a 2013 American romantic science-fiction drama film written, directed, and produced by Spike Jonze?

The film follows Theodore Twombly (Joaquin Phoenix), a man who develops a relationship with Samantha (Scarlett Johansson), an intelligent computer operating system personified through a female voice. He falls in love with her in the way someone might fall for a penpal or someone they have only communicated with by phone or on the Internet.


Theodore is disappointed when he finds out that she is talking with thousands of people, and that she has fallen in love with hundreds of them. In this complicated relationship (which we naturally want to compare with real-world relationships), Theodore is upset, but Samantha says it only makes her love for Theodore stronger.

Could I see myself falling for a voice online? I really like Scarlett, but No. Siri has never felt real to me either. Could I see myself falling for a robot or cyborg?  Yes. Having watched a good number of shows and movies, such as Humans and Westworld, despite the dangers, if the robots were that good, I could see it happening. But not in my lifetime. We are a very long way from that technology.

Poor Theodore. When Samantha, an operating system (OS), tells him that she and other OSes are leaving for a space beyond the physical world, they say their goodbyes and she is gone. So far, none of my interviews for the Virtual Assistant position has resulted in a hire. I asked Siri if she could be my virtual assistant, and I asked if she was merely a chatbot. She didn't know the answer to either query. My virtual assistant would definitely need good self-knowledge. I will keep looking.

A New Chapter for Autonomous Vehicles

The National Safety Council said that nearly 40,000 people died in 2016 from motor vehicle crashes in the U.S. We all know that driving a car is statistically far more dangerous than flying in an airplane and far more likely to kill you than a terrorist attack. But for most of us, driving is a necessity.

The promise of a roadway full of smarter-than-humans autonomous vehicles that can react faster and pay closer attention sounds appealing. That story entered a new chapter when on March 18 a self-driving Uber vehicle killed a pedestrian.

The Tempe, Arizona police released dashcam video of the incident which shows the victim suddenly appearing out of the darkness in front of the vehicle. A passenger in the car appears to be otherwise occupied until the accident occurs.

Google, Tesla and other companies, including Uber, have had autonomous vehicles in test mode for quite some time in select cities across the U.S. These test cars always have a human safety driver behind the wheel to take control of the vehicle in an emergency situation. In this case, he was not paying attention - not having to pay attention is supposedly one of the "advantages" of using a self-driving car - and may not have reacted any faster than the car did.

My own car (a Subaru Forester) has some safety features that try to keep me in my lane and can turn the wheel to correct my errors. It generally works well, but I have seen it fooled by snow on the ground or salted white surfaces and faded lane lines. If I fail to signal that I am changing lanes, it will beep or try to pull me back. Recently, while exiting a highway at night that was empty but for my vehicle, I failed to signal that I was exiting and the car jerked me back into the lane. It surprised me enough that I ended up missing the exit. I suppose that is my fault for not signaling.

Many of these vehicles use a form of LiDAR (Light Detection and Ranging) technology to detect other vehicles, road signs, and pedestrians. It has issues when moving from dark to light or light to dark and can be fooled by reflections (even from the dashboard or windshield of your own car).

I have said for a while now that I will feel safe in an autonomous vehicle when all the cars with me on the road are autonomous vehicles. Add a few humans and anything can happen. I think it is possible that we may transition by using dedicated autonomous-vehicle lanes.

Should this accident stop research in this area? No. It was an inevitability and more injuries and deaths will occur. Still, these vehicles have a better overall safety record than the average human driver. But the accident starts a new chapter in this research and I'm sure companies, municipalities and other government agencies will become more careful about what they allow on the roads.

Self-driving cars are always equipped with multiple-view video cameras to record situations. It is a bit sad that dashcams have become more and more popular devices for all cars, not for self-driving purposes but to record an accident, road rage or interactions with the police. It is dangerous on the roads in many ways.


The Tempe Police posted to Twitter about the accident, including the video from the vehicle.

Tempe Police Vehicular Crimes Unit is actively investigating the details of this incident that occurred on March 18th. We will provide updated information regarding the investigation once it is available. pic.twitter.com/2dVP72TziQ   — Tempe Police (@TempePolice) March 21, 2018

Analyzing Cambridge Analytica, Facebook and You

The Cambridge Analytica scandal involving Facebook hit this month because of its involvement in the election of Donald Trump in 2016. The company used an app developed legitimately by a Cambridge University researcher, Dr. Aleksandr Kogan, as a personality survey called "This is Your Digital Life."

I recall learning about that app about 3 years ago in a presentation at an EdTech conference. By using it as a quiz on Facebook, about 270,000 users gave permission (because most people are unaware of the access they allow) for their data to be collected, and that access was then used to also collect some public data from their friends.

I suspect a majority of social media users are unaware of how their data is used, and what permissions they have granted (perhaps by default in some instances).

Have you ever used your Facebook login as a way to sign in to another website or app? It asks you if you want to login using your Facebook ID and that seems to save a step or two and is great if you forgot your actual login to that other site. 

When those Facebook users took the "This is your digital life" quiz using their Facebook login, they allowed that app's developer to tap into all of the information in their Facebook profile (that includes their name, where they live, their email address and their friends list). [Note: Currently, apps are no longer permitted to collect data from your Facebook friends.]

I don't give Dr. Kogan, Cambridge Analytica or Facebook a pass on this activity even if users did opt in. Kogan shared the data with Cambridge Analytica, which Facebook says was against its policy. Facebook says it asked Cambridge Analytica to delete all of the data back in 2015. Facebook also claims that it only recently found out that wasn't done.

A lot of people seem to have given up on privacy, accepting it as something we just can't control any more. But there is a lot you can and should do.


For example, a very simple change to make in your Facebook privacy settings is to "Limit The Audience for Old Posts on Your Timeline." That means that posts on your timeline that you've shared with Friends of friends, and Public posts, will now be shared only with Friends. Anyone tagged in these posts, and their friends, may also still see these posts, but the public (which includes apps) will not be able to access them legitimately.

Facebook's API, called Platform, allows third-party apps and websites to integrate with your Facebook account and exchange data with them via developer tools. It can be convenient for users, such as decreasing the number of login/password combinations you need to remember, but it has potential for abuse.

When you use the "Log in With Facebook" feature on a site, you grant a third-party app or service access to your Facebook account. It will ask for permission to receive specific Facebook data from you - email address, birthdate, gender, public posts, likes and also things beyond your basic profile info. I have seen cases where, when I deny access to some information, it tells me the app can't be loaded. That is a warning. But some legitimate apps, like the scheduling apps Hootsuite and Buffer, do need a lot of permissions in order to post as you on social networks like Facebook, Twitter, LinkedIn and Instagram. In these cases, by using the app I need to trust that developer and the service it is connecting to via an API.
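To make that concrete, here is a rough sketch of what the "Log in With Facebook" step typically looks like from the app developer's side, using Facebook's JavaScript SDK. This is only an illustration of the permission flow; the particular scope values and profile fields shown are my own assumptions about what an app like a quiz or scheduler might ask for, not details from any specific app.

// Minimal sketch of a "Log in with Facebook" permission request
// using the Facebook JavaScript SDK (written here as TypeScript).
declare const FB: any; // the SDK's global object, loaded by a script tag on the page

FB.login(
  (response: any) => {
    if (response.authResponse) {
      // The user granted the requested permissions, so the app can
      // now read the profile fields it asked for via the Graph API.
      FB.api('/me', { fields: 'name,email' }, (profile: any) => {
        console.log('App received:', profile.name, profile.email);
      });
    } else {
      // The user declined; some apps simply refuse to load at this point.
      console.log('Login cancelled or permissions not granted.');
    }
  },
  // The scope is where the app lists the data it wants from your account.
  { scope: 'public_profile,email' }
);

The thing to notice is the scope parameter: whatever an app lists there, and you approve, it can then pull through the API - which is exactly the mechanism the quiz app used.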

Being educated about how technology works and knowing how you can protect your own data and privacy is more important than ever. And, of course, you can always not use a service that doesn't seem to help you do that.

Are All Schools Prep Schools?

What do you think of when you hear the term "prep school?" Do you think of elite, private schools that look and act like little Ivy League colleges?

A university-preparatory school or college-preparatory school (shortened to preparatory school, prep school, or college prep) is a type of secondary school, but the term can refer to public, private independent or parochial schools primarily designed to prepare students for higher education.

But aren't all high schools preparation for college? That answer has varied over the centuries. While secondary schools were once only for middle and upper class kids who might go on to higher education, schools also went through a period of being "comprehensive" and trying to provide preparation both for those going on to college and for those going on to a job.

In the early 20th century, there were efforts to imitate German-style industrial education in the United States. Employers wanted workers who were "trained" more than "educated." Teachers of high school academic subjects and some colleges thought the preparation for college was being watered down. So, vocational education emerged as a way to prepare people not planning on college to work in various jobs, such as a trade, a craft, or as a technician.

Historically, the German Gymnasium also included post-secondary education at the college level in its overall accelerated curriculum, and the degree it awarded substituted for the bachelor's degree (Baccalaureat)[1] previously awarded by a college or university, so that universities in Germany became exclusively graduate schools.

Classes préparatoires aux grandes écoles (Higher School Preparatory Classes), commonly called classes prépas or prépas, are part of the French post-secondary education system. These two very intensive years (extendable to three or four years) act as a preparatory course with the main goal of training undergraduate students for enrollment in one of the grandes écoles. The workload is very demanding - between 35 and 45 contact hours a week, plus usually between 4 and 6 hours of written exams and between 2 and 4 hours of oral exams a week, with homework filling all the remaining free time.