On the Road to Learning With a GPS

While I was driving in an unfamiliar neighborhood this week using my GPS, I started thinking about how great it would be if there were something like a GPS for learning.

Of course, there is adaptive learning and adaptive teaching: the idea of delivering a custom learning experience that addresses the unique needs of an individual. It does that by using just-in-time feedback, flexible pathways, and a library of resources. This is not a one-size-fits-all learning experience.

When I was studying education in college, we learned about creating a "roadmap" for learning. That was a long time ago when a paper roadmap was the way to travel. It was not adaptive. The user had to adapt. With the Internet came mapping websites. You put in a starting place and a destination and it finds a route. At first, there were no alternate routes, but when sites like Google Maps became available you could select alternatives. If you wanted to avoid a highway, you could drag the route around it.

Then came the GPS. We tend to call those devices "a GPS," but the Global Positioning System (GPS) is actually what makes the device work. It was developed to allow military and civil users to determine geographical locations accurately using satellites. Those devices had all the mapping features, plus they went with you in the car and, most importantly, they were adaptive. If you went down the wrong street or a road was blocked, they adapted your route.

When Google Maps, Apple Maps, Waze, and other apps became available on smartphones, the makers of GPS devices took a hit. Your phone knows where you are and where you want to go. It redirects you when needed. It gives immediate feedback on your progress and tells you your anticipated next step in advance.

Those first mapping programs weren't exactly what we would call artificial intelligence, but today AI is what drives mapping programs forward.

My driving notion of an AI/GPS for learning is here, though it's not quite a set-it-and-forget-it device yet. Several companies, such as Smart Sparrow, offer adaptive learning platforms. I know of a school using Pearson's program Aida Calculus (see video below) which connects multiple forms of AI to personalize learning. The program teaches students how to solve problems and gives real-world applications. Advanced AI algorithms have entered the education space.

Not every teacher or classroom has access to packaged programs for adaptive learning. In my pre-Internet teaching days, we called this approach individualized instruction, which also focuses on the needs of the individual student. It was a teacher-centered approach that tried to shift teaching toward specific, one-need-at-a-time targets.

Over the years, the terms individualized instruction, differentiated teaching, adaptive learning, and personalized learning have sometimes been used interchangeably. They are all related because they describe learning design that attempts to tailor instruction to the understanding, skills, and interests of an individual learner. Today it is done through technology, but we can still use human intervention, curriculum design, pathways, or some blend of these.


https://elearningindustry.com/adaptive-learning-for-schools-colleges

https://www.edsurge.com/research/reports/adaptive-learning-close-up

Strong and Weak AI

Image by Gerd Altmann from Pixabay

Ask several people to define artificial intelligence (AI) and you'll get several different definitions. If some of them are tech people and the others are just regular folks, the definitions will vary even more. Some might say that it means human-like robots. You might get the answer that it is the digital assistant on their countertop or inside their mobile device.

One way of differentiating AI that I don't often hear is by the two categories of weak AI and strong AI.

Weak AI (also known as “Narrow AI”) simulates intelligence. These technologies use algorithms and programmed responses and generally are made for a specific task. When you ask a device to turn on a light or what time it is or to find a channel on your TV, you're using weak AI. The device or software isn't doing any kind of "thinking" though the response might seem to be smart (as in many tasks on a smartphone). You are much more likely to encounter weak AI in your daily life.

Strong AI is closer to mimicking the human brain. At this point, we could say that strong AI is “thinking” and "learning" but I would keep those terms in quotation marks. Those definitions of strong AI might also include some discussion of technology that learns and grows over time which brings us to machine learning (ML), which I would consider a subset of AI.

ML algorithms are becoming more sophisticated, and, whether that excites or frightens you as a user, they are getting to the point where they learn and act based on the data around them. In "unsupervised ML," the algorithm finds structure in data without labeled examples or explicitly programmed rules. In the sci-fi nightmare scenario, the AI no longer needs humans. Of course, that is not even close to true today, as the AI requires humans to set up the programming and to supply the hardware and its power. I don't fear an AI takeover in the near future.
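To make that concrete, here is a toy sketch of unsupervised learning (my own illustration, not tied to any particular product): a tiny k-means clustering routine in plain Python. Nobody tells it which numbers belong together; it discovers the two groups from the data alone.

```python
import random

def kmeans_1d(points, k=2, iters=20, seed=0):
    """Toy unsupervised learning: group 1-D points into k clusters
    with no labels and no hand-written rules."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)       # start from random guesses
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[i].append(p)
        # Move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious groups: values near 1 and values near 10.
data = [0.9, 1.1, 1.0, 9.8, 10.2, 10.0]
print(kmeans_1d(data))  # two centers: one near 1.0, one near 10.0
```

Real systems use far richer data and models, but the principle is the same: the structure comes from the data, not from an explicit program.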

But strong AI and ML can go through the huge amounts of data they are connected to and find useful patterns, including patterns and connections that a human would be unlikely to find. Recently, you may have heard of attempts to use AI to help find a coronavirus vaccine. AI can do very tedious, data-heavy, and time-intensive tasks in a much faster timeframe.

If you consider what your new smarter car is doing when it analyzes the road ahead, the lane lines, objects, your speed, the distance to the car ahead and hundreds or thousands of other factors, you see AI at work. Some of that is simpler weak AI, but more and more it is becoming stronger. Consider all the work being done on autonomous vehicles over the past two decades, much of which has found its way into vehicles that still have drivers.

Of course, cybersecurity and privacy become key issues when data is shared. You may feel more comfortable allowing your thermostat to learn your habits, or your car to learn how and where you drive, than you are about letting the government know that same data. Consider the amount of data we share online doing financial transactions, or even just visiting sites, making purchases, and searching, and you'll find your level of paranoia rising. I may not know who you are as you read this article, but I suspect someone else knows and is more interested in knowing than I am.

I Am In a Strange Loop

"The Treachery of Images" by René Magritte says "This is not a pipe." A strange loop.

I got a copy of Douglas Hofstadter's book, Gödel, Escher, Bach: An Eternal Golden Braid, when I started working at NJIT in 2000. It was my lunch reading. I read it in almost daily spurts, and I often had to reread because it is not light reading.

It was published in 1979 and won the 1980 Pulitzer Prize for general nonfiction. It is said to have inspired many a student to pursue computer science, though it's not really a CS book. It was further described on its cover as a "metaphorical fugue on minds and machines in the spirit of Lewis Carroll." In the book itself, he says, "I realized that to me, Gödel and Escher and Bach were only shadows cast in different directions by some central solid essence. I tried to reconstruct the central object, and came up with this book."

I had not finished the book when I left NJIT, and it went on a shelf at home. This summer, I was trying to thin out my too-many books and came upon it again, its bookmark glowering at me from just past the halfway point. So I went back to reading it. Still tough going, though very interesting.

I remembered writing a post here about the book (it turned out to be from 2007) when I came upon a new book by Hofstadter titled I Am a Strange Loop. That "strange loop" was something he originally proposed in the 1979 book. This post is a rewrite and update on that older post.

The earlier book is a meditation on human thought and creativity. It mixes the music of Bach, the artwork of Escher, and the mathematics of Gödel. In the late 1970s, when he was writing, interest in computers was high, and artificial intelligence (AI) was still more of an idea than a reality. Reading Gödel, Escher, Bach exposed me to some abstruse math (like undecidability, recursion, and those strange loops). Each chapter includes a dialogue between the Tortoise and Achilles and other characters to dramatize its concepts, in the style of Lewis Carroll's "What the Tortoise Said to Achilles" (though some of you will say the form really goes back to Zeno's paradox of Achilles and the tortoise). Allusions to Bach's music and Escher's paradox-loving art are also used, as well as other mathematicians, artists, and thinkers. Gödel's incompleteness theorem serves as his example for describing the unique properties of minds.

His newer book back then was I Am a Strange Loop. I haven't read it, but since I made it through the earlier volume (albeit over 18 years), I may give Strange Loop a try.

From what I have read about the author, he was disappointed with how Gödel, Escher, Bach (GEB) was received. It certainly got good reviews - and a Pulitzer Prize - but he felt that readers and reviewers missed what he saw as the central theme. I have an older edition, but in the 20th-anniversary edition he added that the theme was "a very personal attempt to say how it is that animate beings can come out of inanimate matter. What is a self, and how can a self come out of stuff that is as selfless as a stone or a puddle?"

I Am a Strange Loop focuses on that theme. In both books, he addresses "self-referential systems." (see link at bottom)

One thing that stuck with me from my first attempt at GEB is his use of "meta," which he defines as meaning "about." Some people might say that it means "containing." Back in the early part of this century, I thought about that when I first began using Moodle as a learning management system. When you set up a new course in Moodle (and in other LMSs since then), it asks if this is a "metacourse." In Moodle, that means a course that "automatically enrolls participants from other 'child' courses." Metacourses (AKA "master courses") feature all or part of the same content but customized to the enrollments of other sections.

This was a feature used in big courses like English or Chemistry 101. In my courses, I thought more about having things like meta-discussions or discussions about discussions. My metacourse might be a course about the course. Quite self-referential.

I suppose it can get loopy when you start saying that if we have a course x, the metacourse X could be a course to talk about course x but would not include course x within itself. Though I suppose that it could.

Have I lost you?

Certainly, metatags are quite common on web pages, photos and for cataloging, categorizing and characterizing content objects. Each post on Serendipity35 is tagged with one or more categories and a string of keyword tags that help readers find similar content and help search engines make the post searchable.

A brief Q&A with Hofstadter published in Wired in March 2007 about the newer book says that he considers the central question to be "What am I?"

His examples of "strange loops" include Escher's piece, "Drawing Hands," which shows two hands drawing each other, and the sentence, "I am lying."
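Self-reference has a neat computational analogue: a quine, a program that prints its own source code. This two-line Python sketch (my illustration, not an example from the book) is one. It has no comments because any comment would not appear in the output and would break the loop.

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The `%r` conversion inserts the string's own repr back into itself, so the printed text reproduces the program exactly: the code describes itself, a small strange loop in software.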

Hofstadter gets spiritual in his further thinking, finding at the core of each person a soul. He feels the "soul is an abstract pattern." Because he feels the soul is stronger in mammals and weaker in insects, this thinking brought him to vegetarianism.

He was once considered an AI researcher, but he now thinks of himself as a cognitive scientist.

Reconsidering GEB, he decides that another mistake in that book's approach may have been not seeing that the human mind and smarter machines are fundamentally different. He has less of an interest in computers now and claims that he always thought his writing would "resonate with people who love literature, art, and music" more than with tech people.

If it has taken me much longer to finish Gödel, Escher, Bach than it should have, that makes sense if we follow the strange loop of Hofstadter's Law: "It always takes longer than you expect, even when you take into account Hofstadter's Law."



End Note: 
A self-referential situation is one in which the forecasts made by the human agents involved serve to create the world they are trying to forecast (http://epress.anu.edu.au/cs/mobile_devices/ch04s03.html). Social systems are self-referential systems based on meaningful communication (http://www.n4bz.org/gst/gst12.htm).

Are You Prometheus or Zeus?

Prometheus Brings Fire by Heinrich Friedrich Füger, 1817


Do you know the myth of Prometheus and his argument with Zeus? I am reading Stephen Fry's retellings of the myths of Ancient Greece, Mythos and its companion volume Heroes, and he has suggested that we are approaching a similar moment in our history.

I don't know if you can see yourself as Jason aboard the Argo questing for the Golden Fleece, or as Oedipus solving the riddle of the Sphinx. But I think we might divide all of us into two groups by deciding on which side we stand when it comes to artificial intelligence as "personified" by any robot with a human appearance and advanced artificial intelligence.

The myth that applies is the story of Prometheus and his argument with Zeus.

In Greek mythology, Prometheus, whose name means "forethought," is credited with the creation of man from clay and is also the one who defies Zeus by stealing fire and giving it to humanity.

To humans, his theft is heroic. Fire, perhaps our first technology, enabled progress, civilization and the human arts and sciences.

Prometheus believed that humans needed and deserved fire. Zeus did not.

In Hesiod's version of the story of Prometheus and the theft of fire, it is clear that Zeus withheld not only fire from humanity but also "the means of life." Zeus feared that if humans had fire and all that it would lead them to, they would no longer need the gods.

Fry writes that “The Titan Prometheus made human beings in clay. The spit of Zeus and the breath of Athena gave them life. But Zeus refused to allow us to have fire. And I think fire means both literal fire – to allow us to become Bronze Age man, to create weapons and to cook meat. To frighten the fierce animals and become the strongest, physically and technically. But also the internal fire of self-consciousness and creativity. The divine fire. Zeus did not want us to have it. And Prometheus stole fire from heaven and gave it to man.”

If we think about a modern Prometheus, perhaps we can make him into a scientist who has created a very powerful android.

It is fitting that the word "android" was coined from the Greek andr-, meaning "man" (male, as opposed to anthrōp-, meaning "human being"), and the suffix -oid, meaning "having the form or likeness of." (We use "android" to refer to any human-looking robot, but a robot with a female appearance can also be referred to as a "gynoid.")

Our Prometheus the AI scientist is ready to give his android to the world. But his boss, Mr. Zeus, is opposed. "What will happen when the android becomes sapient?" Zeus asks. Sapience is the ability of an organism or entity to act with judgment. "And what if these androids also become sentient?" Zeus asks. Sentience is the capacity to feel, perceive, or experience subjectively.

Stephen Fry takes up that argument:

"In a hundred years time, we can guarantee there will be sapient beings on this earth that have been intelligently designed. You could call them robots, you could call them compounds of augmented biology and artificial intelligence, but they will exist. The future is enormous, it has never been more existentially transformative.

"Will the Prometheus who makes the first piece of really impressive robotic AI – like Frankenstein or the Prometheus back in the Greek myth – have the question: do we give it fire? Do we give these creatures self-knowledge, self-consciousness? An autonomy that is greater than any other machine has ever had and would be similar to ours? In other words: shall we be Zeus and deny them fire because we are afraid of them? Because they will destroy us? The Greeks, and the human beings, did destroy the gods. They no longer needed them. And it is very possible that we will create a race of sapient beings who will not need us."

So, are you like Prometheus wanting mankind to have these highly evolved robots? Or do you agree with Zeus that they will eventually destroy us?


Here is an excerpt concerning this idea from an interview Stephen Fry did in Holland.
(Full interview at https://dewerelddraaitdoor.bnnvara.nl/nieuws/de-twee-kanten-van-stephen-fry)