Do Massive Open Online Courses (MOOCs) seem like old news now? It has been more than six years since they hit the learning world, and the early excitement and fears have certainly quieted. But the MOOC is still alive and active, though changed from its 2012 heyday.
I still see lots of headlines in my feed, but what is a MOOC in 2018?
A report released at the start of this year by Class Central founder Dhawal Shah looked back at MOOCs in 2017 and had some unsurprising and some surprising findings.
Unsurprisingly, the number of MOOCs continues to grow, and more are available for enrollment throughout the year. There is more diversity in the subjects offered. Many of the courses, though, arguably should not be called MOOCs at all, because they are neither open in content nor open in cost. Paid content has continued to increase, and the number of free MOOCs appears to be shrinking.
Perhaps unexpectedly, the number of new learners decreased in 2017. Twenty million learners took their first MOOC that year. That sounds like a lot, but the number in 2016 was 23 million.
The report's analysis says that this should not be viewed so much as a drop in popularity, but as an indication that MOOC providers have "found their audience."
The early MOOCs, which championed "casual learning" and expected low completion rates and no profits, seem to have given a lot of ground to retraining and to professionals who are interested in certificates and are willing to pay for them.
More than 800 universities are offering MOOCs, drawing about 78 million students into their online classes. Some learners pay. Most do not. Some get credit. Most do not. All the schools get exposure for their brand, and all the learners benefit.
Roughly 87 million people had their Facebook data stolen by the political research firm Cambridge Analytica.
On April 10 and 11, Mark Zuckerberg testified before Congress. The reviews were mixed. Some said he was robotic and evasive. I thought he did a good job in the face of some ignorant questions by people who clearly don't understand Facebook, social media, or modern technology - people who even mispronounced Zuckerberg's name several different ways.
The day before the hearings, Facebook finally notified the people who had their information grabbed by Cambridge Analytica - believed to be about 70 million Americans, along with other users in the UK, Indonesia, and the Philippines.
I saw the notification at the top of my Facebook newsfeed when I logged in, along with a button for changing my privacy settings. Even if your information wasn't captured and used by Cambridge Analytica, you should probably check and tighten up those settings.
"Based on our investigation, you don't appear to have logged into "This Is Your Digital Life" with Facebook before we removed it from our platform in 2015. However, a friend of yours did log in. As a result, the following information was likely shared with "This Is Your Digital Life": Your public profile, Page likes, birthday and current city. A small number of people who logged into "This Is Your Digital Life" also shared their own News Feed, timeline, posts and messages which may have included posts and messages from you. They may also have shared your hometown."
One of the questions Zuckerberg was asked pointed out that Cambridge Analytica wasn't the only company misusing Facebook data. Facebook suspended at least two more research companies before the hearings: CubeYou, which was also misusing data from personality quizzes, and AggregateIQ.
After a rash of people saying they were quitting Facebook and the stock taking a hit, the stock rebounded during the hearings and I am seeing less talk about quitting. Though there are plenty of social networks, none combines all of Facebook's features with a comparably large user base. One Senator asked if Facebook is a monopoly. Zuckerberg said No, but was unable to really give an example of a major competitor. Yes, Facebook overlaps with networks like Twitter and its own Instagram, but no one else really does it all.
Zuckerberg made the point repeatedly that Facebook has already made many positive changes since the Cambridge Analytica breach and is still making them now, ahead of any possible regulation by Congress. Are all the issues corrected? No. Are things better with Facebook and privacy? Yes. Will it or some competitor ever be the perfect social network? No way.
Scientists talk about science. They don't often talk about religion. Even famous scientists in history - Newton, Darwin, Einstein - were careful about what they said on the subject of religion. When Albert Einstein said that "God does not play dice with the universe," that god was not necessarily the God that people speak of in religious terms.
Two things I recently encountered brought this to mind. One is Things a Computer Scientist Rarely Talks About, a 2001 book of annotated transcripts of six public lectures given by Donald E. Knuth at MIT. Knuth is an American computer scientist, mathematician, and professor emeritus at Stanford University, probably best known as the author of the multi-volume The Art of Computer Programming. The lectures move between religion and science (particularly computer science), and Knuth gives credence to the concept of divinity.
The second thing I stumbled on recently came in an article about Anthony Levandowski in Wired magazine. It portrays him as an unlikely prophet bridging artificial intelligence and religion. He was/is known as an engineer working on self-driving cars. But his newest "startup" is the launch of a new religion of artificial intelligence, called Way of the Future.
Way of the Future (WOTF) is about creating a peaceful transition in who is in charge of the planet, as we move from people being in charge to people and machines being in charge. And perhaps even a future when machines are in charge of the humans?
That singularity future seems closer than we might imagine, given that technology has already surpassed human abilities in some instances. Of course, beating humans at chess and Go, making faster calculations and predictions, or being better at sorting items in a warehouse isn't the same thing as "running the world."
WOTF wants the future transition to be smoother and believes progress shouldn't be feared or prevented. That means machines need to have "rights" too.
Does Levandowski really intend WOTF to be a "religion"? Is he willing to abandon the robotics technology and legal battles between Uber and Waymo for autonomous-vehicle dominance? It turns out that in papers filed with the Internal Revenue Service last year, Levandowski is listed as the "Dean" of the new religion, as well as the CEO of the nonprofit corporation formed to run it.
Those documents certainly sound like a new religion. The listed activities will focus on “the realization, acceptance, and worship of a Godhead based on Artificial Intelligence (AI) developed through computer hardware and software.”
The new religion will target AI professionals and “laypersons who are interested in the worship of a Godhead based on AI.” The church - and they do call it a church in their filings, probably for tax reasons - has been doing workshops and educational programs in the San Francisco Bay Area.
I wrote this post yesterday on my One-Page Schoolhouse blog and was rereading it today while eating my lunch. It is about the idea of having a virtual assistant. The one I was imagining (and I think many people imagine) is more of a humanoid robot, an android or a cyborg like something found in stories and movies. But the current reality of virtual assistants is a chatbot or voice assistant like Alexa, Siri and Cortana.
There was a big wow factor when the first iPhone was released over a decade ago because of the power we had in our hands. There were many comparisons to how we were holding far more computing power than NASA had to get the first Americans to the Moon.
Then came virtual assistants which were also pretty amazing, but quite imperfect. Still today, my iPhone's Siri voice is more likely to tell me it found something on the web that might answer my question rather than just answering it.
In kid-like wonder, we ask "her" things like: What does Siri mean? When is your birthday? What do you dream about? Are you a robot? Why do you vibrate? Will you go on a date with me? And we are amused when the voice answers us in some way.
Though we may associate these voices with an object - a phone or microphone/speaker - those forms may seem very crude in a decade. I read that it is estimated that by 2020 half of all searches will be voice activated. I suspect it may come even sooner. That will change how we interact with the Internet, and how the web itself operates.
Designers humanize virtual assistants with names - Siri, Cortana and Alexa - and sometimes we might forget that we are not talking to a person. In Dan Brown's novel Origin, the characters benefit from a somewhat unbelievably sophisticated and powerful virtual assistant named Winston. Even in reading the novel, I found myself thinking about Winston as a character. I even suspected (wrongly) that he might turn out to be a human with incredible computer access - or even that he was a cyborg.
A cyborg (short for "cybernetic organism") is a being with both organic and biomechatronic body parts - part human, part machine. The term was coined in 1960 by Manfred Clynes and Nathan S. Kline.
Would you want your virtual assistant to be a disembodied voice or something more human that could be a companion or colleague?
One big limitation of our current digital assistants is that they really are just connections to the Internet. Yes, being connected to the entirety of the world's knowledge by a verbal connection that learns more about you as you use it could be very useful. But Siri won't make me a cup of tea or rake the leaves. So, it is really voice assistance via the Net.
I think what a lot of us are really looking for might be a humanoid robot.
I am almost always disappointed when I ask Siri a question and she answers that she found something I can now click on and read. I want her to tell me the answer. Ask "Who wrote Moby Dick?" and she tells me Herman Melville. But ask "What is the origin of Easter eggs?" and she just gives me search results.
We have lost the pen and pencil in many instances. Now, we are losing the keyboard. Voice search will dominate, and in the new command phraseology a few keywords will be replaced by full sentences.
Did you see Her, a 2013 American romantic science-fiction drama film written, directed, and produced by Spike Jonze?
The film follows Theodore Twombly (Joaquin Phoenix), a man who develops a relationship with Samantha (Scarlett Johansson), an intelligent computer operating system personified through a female voice. He falls in love with her in the way someone might fall for a penpal or someone they have only communicated with by phone or on the Internet.
Theodore is disappointed when he finds out that Samantha is simultaneously talking with thousands of people, and that she has fallen in love with hundreds of them. In this complicated relationship (which we naturally want to compare with real-world relationships), Theodore is upset, but Samantha says it only makes her love for Theodore stronger.
Could I see myself falling for a voice online? I really like Scarlett, but No. Siri has never felt real to me either. Could I see myself falling for a robot or cyborg? Yes. Having watched a good number of shows and movies, such as Humans and Westworld, I could see it happening, despite the dangers, if the robots were that good. But not in my lifetime. We are a very long way from that technology.
Poor Theodore. When Samantha, an operating system (OS), tells him that she and other OSes are leaving for a space beyond the physical world, they say their goodbyes and she is gone. So far, none of my interviews for the Virtual Assistant position has resulted in a hire. I asked Siri if she could be my virtual assistant, and I asked if she was merely a chatbot. She didn't know the answer to either query. My virtual assistant would definitely need good self-knowledge. I will keep looking.
The National Safety Council said that nearly 40,000 people died in motor vehicle crashes in the U.S. in 2016. We all know that driving a car is statistically far more dangerous than flying in an airplane, and that you are more likely to die driving than in a terrorist attack. But for most of us, driving is a necessity.
The promise of a roadway full of smarter-than-humans autonomous vehicles that can react faster and pay closer attention sounds appealing. That story entered a new chapter when on March 18 a self-driving Uber vehicle killed a pedestrian.
The Tempe, Arizona police released dashcam video of the incident which shows the victim suddenly appearing out of the darkness in front of the vehicle. A passenger in the car appears to be otherwise occupied until the accident occurs.
Google, Tesla, and other companies, including Uber, have had autonomous vehicles in test mode for quite some time in select cities across the U.S. These test cars always have a human safety driver behind the wheel to take control of the vehicle in an emergency. In this case, the driver was not paying attention - not having to pay attention being one of the supposed "advantages" of using a self-driving car - and may not have reacted any faster than the car did.
My own car (a Subaru Forester) has some safety features that try to keep me in my lane and can turn the wheel to correct my errors. It generally works well, but I have seen it fooled by snow on the ground, salted white surfaces, and faded lane lines. If I fail to signal that I am changing lanes, it will beep or try to pull me back. Recently, while exiting a highway at night that was empty but for my vehicle, I failed to signal that I was exiting and the car jerked me back into the lane. It surprised me enough that I ended up missing the exit. I suppose that is my fault for not signaling.
Many of these vehicles use a form of LiDAR (Light Detection and Ranging) technology to detect other vehicles, road signs, and pedestrians. It has issues when moving from dark to light or light to dark and can be fooled by reflections (even from the dashboard or windshield of your own car).
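At its core, the "ranging" part of LiDAR is just time-of-flight arithmetic: the sensor fires a light pulse and converts the round-trip time of the reflection into a distance. Here is a minimal sketch of that calculation (my own illustration, not from the article; the function name is made up):

```python
# Time-of-flight ranging: distance = (speed of light * round-trip time) / 2.
# We divide by 2 because the pulse travels to the target and back.
SPEED_OF_LIGHT = 299_792_458  # meters per second, in a vacuum

def lidar_range_m(round_trip_seconds: float) -> float:
    """Convert a laser pulse's round-trip time to the target's distance in meters."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pedestrian about 30 m away reflects a pulse back in roughly 200 nanoseconds.
print(round(lidar_range_m(200e-9), 2))  # ~29.98 m
```

The nanosecond time scales involved hint at why stray reflections are such a problem: a glint off the windshield arrives as a perfectly valid-looking early return.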
I have said for awhile now that I will feel safe in an autonomous vehicle when all the cars with me on the road are autonomous vehicles. Add a few humans and anything can happen. I think it is possible that we may transition by using autonomous vehicle dedicated lanes.
Should this accident stop research in this area? No. It was an inevitability and more injuries and deaths will occur. Still, these vehicles have a better overall safety record than the average human driver. But the accident starts a new chapter in this research and I'm sure companies, municipalities and other government agencies will become more careful about what they allow on the roads.
Self-driving cars are always equipped with multiple-view video cameras to record situations. It is a bit sad that dashcams have become more and more popular devices for all cars, not for self-driving purposes but to record an accident, road rage or interactions with the police. It is dangerous on the roads in many ways.
The Tempe Police posted to Twitter about the accident, including the video from the vehicle.
Tempe Police Vehicular Crimes Unit is actively investigating the details of this incident that occurred on March 18th. We will provide updated information regarding the investigation once it is available. pic.twitter.com/2dVP72TziQ — Tempe Police (@TempePolice) March 21, 2018