Learning How to Learn Online

I have been reading about some of the sessions at the International Conference on E-Learning in the Workplace (ICELW) that took place this month at Columbia University.

One keynoter was Dr. Barbara Oakley, Professor of Engineering at Oakland University in Rochester, Michigan. She is known for her course "Learning How to Learn," which is sometimes described as "the world's most popular MOOC." It has had more than 2 million participants. There may be MOOCs with more participants, but her course has been translated into multiple languages and has had some serious media attention. It is a broader kind of course, not aimed at a college audience alone, so it fits a workplace-focused conference and lifelong learning. It is described as a course that "gives you easy access to the invaluable learning techniques used by experts in art, music, literature, math, science, sports, and many other disciplines."

I haven't taken this course, but I plan to this summer. From what I have read, many of the concepts are ones I know from my own teaching and education courses. For example, "how the brain uses two very different learning modes and how it encapsulates ('chunks') information." That is something I learned a long time ago teaching secondary school, and also used extensively when doing instructional design on other professors' courses as they moved online.

I was more interested in knowing what her "secrets" would be for building and teaching that MOOC. I haven't seen any video from the conference, but here are some bits I have found about her session.  

She uses the "Learning How to Learn" principles of learning that are taught in the course in the design of the course itself. She is not averse to PowerPoint slides but uses simple visuals to chunk key ideas.

Oakley emphasized the impact of integrating lessons from neuroscience. One of those is neural reuse theory, which tries to explain the underlying neural processes that allow humans to acquire recently invented cognitive capacities. It attempts to explain how the brain responds to new cognitive processes - think of many of our digital encounters - which are cultural inventions too modern to be products of evolution. A simple application is her use of metaphors (a key element of neural reuse theory), because they give students a quick way into new ideas.

She emphasizes paying attention to production values in creating a course. She did her course production herself at home and says the cost was $5,000. I assume that was for software, video hardware, etc. Many schools now have production facilities for online course development.

Bottom-up (as opposed to top-down) attentional mechanisms are a theory from neuroscience that she uses to keep attention on the screen. Bottom-up mechanisms are thought to operate on raw sensory input, rapidly and involuntarily shifting attention to salient visual features of potential importance. Think of the sudden movement that could be a predator. Top-down mechanisms implement our longer-term cognitive strategies, biasing attention toward something like a learned shape or color that signals a predator.

This is a more complex topic than can be covered in a blog post, but it is easy to accept that the brain is limited in its capacity to process all the sensory stimuli in our sensory-overloaded physical world. The brain relies on the cognitive process of attention to focus neural resources according to the contingencies of the moment. You can divide attention into two functions. Bottom-up attention is attention guided by externally driven stimuli - the brightly colored pop-up ad on a screen, for example. Instructional designers can make use of techniques that marketers and game designers have long used. Top-down attention refers to internal guidance of attention based on factors such as prior knowledge and current goals. The overall organizational structure of a course - weekly elements, labels, icons - can take advantage of top-down attention.

She recommended the use of "unexpected humor" to help maintain interest, which can also be a bottom-up technique.

Wherever practicable, theory is instantiated with examples drawn from personal stories.

Overall, this is all about trying harder to engage learners. Oakley pointed out that in a MOOC learners aren’t "caged up like students on campus." MOOC learners are free-range learners - free to come and go, free to stop paying attention or attending class - and if course production values are weak, students are more likely to tune out.

In designing and teaching an online course in the traditional college/tuition/credit/degree situation, we do have students caged more, but that doesn't mean their brains operate differently.

One of Oakley's earlier books is A Mind for Numbers, with the subtitle How to Excel at Math and Science (Even If You Flunked Algebra), and her new book this summer is Learning How to Learn, whose subtitle is How to Succeed in School Without Spending All Your Time Studying; A Guide for Kids and Teens. Those subtitles remind me that these books and the topics they address are lifelong learning concerns, though certainly of interest to K-20 teachers.

I am planning to take her course this summer before I embark on a new course design project (see coursera.org/learn/learning-how-to-learn). I'll follow up on this post when I finish. If I finish. If I don't finish, I guess I'll make some analysis of why - was it me or the course?



Digital Humanities and Open Pedagogy


I see that the Google Science Fair is back and, though many K-12 teachers are at the end of their academic year, this summer is the time to plan for what students could do in the fall. It seems like a "science" activity, but it is also a place where the "digital humanities" belong.

Looking at the website googlesciencefair.com, you find projects that take the science well beyond the science classroom. Closely related are the activities in Google's Applied Digital Skills curriculum. There you can find some well-constructed lessons that can be done in as little as an hour, and others that could stretch across a week or a unit.

For example, one suitable for middle and high school students is on creating a resume. It's something I did with students decades ago in a non-digital way. The skills involved here are many. Obviously, there is the writing, some research and some analysis of your own skills and ambitions. There are also the more digital forms of collaboration, document formatting and submission. I did this with undergrads a few years ago and required each of them to research and submit their resume to an internship opportunity. 

A longer activity that fits in so well with topics currently at the top of the news is about Technology, Ethics, and Security. Students research technology risks and dangers, explore solutions, and create a report to communicate their findings.

I would also note that the digital humanities must include what humanities teachers do in their work. 

Quizzes in Google Forms have been around for a few years, and educators have used them for class assessments and, in unintended ways, as other kinds of tools. New features were recently added based on feedback from teachers' creative uses of Quizzes.

One example: using Google's machine learning, Forms can now predict the correct answer as a teacher types the question, and it can also provide options for wrong answers. A quiz on U.S. state capitals, for instance, could use this feature to "predict" the correct capital for every state.

That doesn't mean that Google doesn't have a special interest in the computer science side of education. It offers special resources in those areas, as well as professional development grants to support CS educators in Europe, the Middle East, and Africa.

I don't want to sound like an advertisement for Google - though advertising free and open resources isn't like selling something. Much of what the digital humanities can do moves teachers into an "open pedagogy." It changes the way we teach. 

This is more important than just finding resources.

David Wiley has written:
"Hundreds of thousands of words have been written about open educational resources, but precious little has been written about how OER – or openness more generally – changes the practice of education. Substituting OER for expensive commercial resources definitely save money and increase access to core instructional materials. Increasing access to core instructional materials will necessarily make significant improvements in learning outcomes for students who otherwise wouldn’t have had access to the materials (e.g., couldn’t afford to purchase their textbooks). If the percentage of those students in a given population is large enough, their improvement in learning may even be detectable when comparing learning in the population before OER adoption with learning in the population after OER adoption. Saving significant amounts of money and doing no harm to learning outcomes (or even slightly improving learning outcomes) is clearly a win. However, there are much bigger victories to be won with openness."

Too much of the emphasis when talking about OER is on free textbooks and cost savings, and not enough is on the many other resources available that allow educators to customize their curriculum and even allow for individual differences. The longtime practice of designing curriculum around a commercial textbook needs to end.

I have written here about what I called Open Everything. What I am now calling Open Pedagogy would fall under that umbrella term. Others have called this pedagogy Open Educational Practices (OEP). In either case, it is the use of Open Educational Resources for teaching and learning in order to innovate the learning process. In this, I include the open sharing not only of the resources, but also of the teaching practices.

Currently, I would say the level of openness we see is low. Others have defined the levels this way: Low - teachers believe they know what learners have to learn, and the focus is on knowledge transfer. Medium - objectives are predetermined (a closed environment), but open pedagogical models are used and dialogue and problem-based learning are encouraged. The goal is the highest level, where learning objectives and pathways are largely governed by the learners themselves.

 

The Reverse Turing Test for AI

Google Duplex has been described as the world's most lifelike chatbot. At the Google IO event in May 2018, Google revealed this extension of the Google Assistant that allows it to carry out natural conversations by mimicking human voice. Duplex is still in development and will receive further testing during summer 2018.

The assistant can autonomously complete tasks such as calling to book an appointment, making a restaurant reservation, or calling the library to verify its hours. Duplex can complete most tasks autonomously; it can also recognize situations it is unable to complete and then signal a human operator to finish the task.
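Google hasn't said how that handoff works under the hood, but the pattern it describes - complete the task autonomously when confident, otherwise signal a person - is easy to sketch. Here is a purely hypothetical Python illustration; the class, the 0.8 confidence threshold, and the hand_off_to_human function are my own stand-ins, not anything from Google.

```python
from dataclasses import dataclass

# Hypothetical sketch of "autonomous with human fallback." Nothing here
# reflects Duplex's real internals; names and threshold are illustrative.

@dataclass
class AttemptResult:
    transcript: str    # what the bot accomplished or heard
    confidence: float  # how sure it is that the task was completed
    completed: bool    # whether it believes the task is done

def hand_off_to_human(task: str, partial: AttemptResult) -> str:
    return f"Operator, please finish '{task}' (bot progress: {partial.transcript})"

def handle_call(task: str, attempt: AttemptResult, threshold: float = 0.8) -> str:
    if attempt.completed and attempt.confidence >= threshold:
        return attempt.transcript               # finish autonomously
    return hand_off_to_human(task, attempt)     # signal a human operator

print(handle_call("book a table for two", AttemptResult("Reserved for 7 pm.", 0.95, True)))
print(handle_call("reschedule a haircut", AttemptResult("Caller asked something unexpected.", 0.30, False)))
```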

Duplex speaks in a more natural voice and language by incorporating "speech disfluencies" such as filler words like "hmm" and "uh" and by using common phrases such as "mhm" and "gotcha." It is also programmed to use a more human-like intonation and response latency.

Does this sound like a wonderful advancement in AI and language processing? Perhaps, but it has also been met with some criticism.

Are you familiar with the Turing Test? Developed by Alan Turing in 1950, it is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. For example, when communicating with a machine via speech or text, can the human tell that the other participant is a machine? If the human can't tell that the interaction is with a machine, the machine passes the Turing Test.

Should a machine have to tell you if it's a machine? After the Duplex announcement, people started posting concerns about the ethical and societal questions of this use of artificial intelligence.

Privacy - a real hot button issue right now - is another concern. Your conversations with Duplex are recorded in order for the virtual assistant to analyze and respond. Google later issued a statement saying, "We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified."

Another example of this came to me in an episode of Marketplace Tech with Molly Wood that discusses Microsoft's purchase of a company called Semantic Machines, which works on something called "conversational AI." That is their term for computers that sound and respond like humans.

This is meant to be used with digital assistants like Microsoft's Cortana, Apple's Siri, Amazon's Alexa or Bixby on Samsung. In a demo played on the podcast, the humans on the other end of the calls made by the AI assistant did not know they were talking to a computer.

Do we need a "Turing Test in Reverse?" Something that tells us that we are talking to a machine? In that case, a failed Turing test result is what we would want to tell us that we are dealing with a machine and not a human.

To really grasp the power of this kind of AI assistant, take a look/listen to this excerpt from the Google IO keynote where you hear Duplex make two appointments.  It is impressively scary.

Things like Google Duplex are not meant to replace humans but to carry out very specific tasks that Google calls "closed domains." It won't be your online therapist, but it will book a table at that restaurant, or maybe not mind being on hold for 22 minutes to deal with motor vehicles.

The demo voice does not sound like a computer or Siri or most of the computer voices we have become accustomed to hearing. 

But is there an "uncanny valley" for machine voices as there is for humanoid robots and animation? That valley is where things get too close to human and we are in the "creepy treehouse in the uncanny valley." 

I imagine some businesses would be very excited about using these AI assistants to answer basic service, support and reservation calls. Would you be okay in knowing that when you call to make that dentist appointment that you will be talking to a computer? 

The research continues. Google Duplex uses a recurrent neural network (RNN), which is beyond my tech knowledge base, but this seems to be the way ahead for machine learning, language modeling, and speech recognition.
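Google hasn't published the model itself, but the "recurrent" idea can be shown in a few lines. Below is a toy vanilla RNN step in Python with NumPy, purely illustrative: at each time step the network mixes the new input with a hidden state carried over from the previous step, which is how it keeps context across a sequence of sounds or words. The sizes and random weights are arbitrary.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One time step: combine the current input with the previous hidden state."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 8, 16, 5

# Toy weights; a real speech model would learn these from data.
W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)
for x_t in rng.normal(size=(seq_len, input_dim)):  # a toy input sequence
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)          # hidden state accumulates context
print(h.shape)  # (16,)
```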

Not having to spend a bunch of hours each week on the phone doing fairly simple tasks seems like a good thing. But if AI assistant HAL refuses to open the pod bay doors, I'm going to panic.

Will this technology be misused? Absolutely. That always happens, no matter how much testing we do. Should we move forward with the research? Well, no one is asking for my approval, but I say yes.

Have You Noticed a Lot of Updates to User Agreements Lately?

You probably have received word via email or in apps lately about changes to company privacy and security agreements. Many companies are updating their privacy policy to make it "more clear and transparent." Why the sudden interest?

That was what a friend asked me recently. He surmised that it had "something to do with all the Facebook issues." That is partially correct. Having Mark Zuckerberg testify to the U.S. Senate and then to the European Parliament certainly put a spotlight on these issues.

But what really pushed companies was the EU's General Data Protection Regulation (GDPR) which went into effect this week. Since most websites are global, even if they don't think of themselves as being global, most big companies decided to adopt the GDPR standards for everyone, including their U.S. clients.

What I am seeing (yes, I read the fine print) is that they have added more detail about the information they collect, how they process that data, and how you can control your data. They may have updates on how they use cookies, for example, or on how you can change who else gets to see your data. Some of these options have been around for a while, but most users either didn't know about them or just didn't want to be bothered. For example, you have long been able to block all cookies or third-party cookies, or have them wiped when you close your browser. Did you ever change those settings?

These new changes seem to me to be a good and necessary next step. Add to the Facebook spotlight and GDPR the fact that Google's Chrome browser, in its July 2018 version 68 release, will mark all HTTP sites as "not secure." Having HTTPS ("S" for secure) in your URL will become important. If your site appears to users as NOT SECURE, you can expect people to click away from it.
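If you run your own site, the usual fix is to serve everything over HTTPS and redirect any plain-HTTP request. That is normally handled at the web server or hosting level (often with a free certificate from Let's Encrypt), but here is a minimal sketch of the idea in Python using Flask; the app is hypothetical, and behind a proxy or CDN you would check a forwarded-protocol header instead of is_secure.

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.before_request
def force_https():
    # If the request arrived over plain HTTP, send a permanent redirect
    # to the same URL with the https:// scheme.
    if not request.is_secure:
        secure_url = request.url.replace("http://", "https://", 1)
        return redirect(secure_url, code=301)

@app.route("/")
def home():
    return "Served over HTTPS."
```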

Blog Followers

I write regularly on five blog sites besides this one. It is always nice to see stats rise on the number of hits and visitors that come to the sites. Some blog platforms allow you to have "followers" - people who are notified when you post something new.

I have noticed something the past few months on two blogs I own that are hosted by WordPress. There has been a marked increase in followers. That is a good thing, right? Well, yes, but ALL of these new followers list an @outlook.com email address. I'm suspicious.

In early 2018, Outlook.com had a reported 400 million active users. That's a lot of users, but that number hasn't grown the way Gmail's has.
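If you want to check a pattern like this yourself, one rough approach is to export the follower list (assuming your blog platform lets you download subscriber emails as a CSV) and tally the domains. A minimal sketch in Python, assuming a hypothetical followers.csv file with an "email" column:

```python
import csv
from collections import Counter

def domain_counts(path: str) -> Counter:
    """Tally the email domains in an exported follower list."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            email = (row.get("email") or "").strip().lower()
            if "@" in email:
                counts[email.split("@")[-1]] += 1
    return counts

if __name__ == "__main__":
    # Adjust the file name and column to match what your platform exports.
    for domain, count in domain_counts("followers.csv").most_common(10):
        print(f"{domain}: {count}")
```

A sudden spike where one free-mail domain accounts for nearly every new follower is at least worth watching.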

But what might these new followers be plotting? Are they bots? Fake Russian accounts hoping to get into my blog and use it for nefarious purposes?

So far, nothing odd has happened concerning these new followers.

Has anyone else reading this found something similar happening with their blog or website?