Will Your Instructional Designer Be AI?

Recently, I read an article about using artificial intelligence (AI) for the instructional design of courses. Initially, that frightened me. First, it might mean less work for instructional designers, a role I have both held myself and managed a department of. Second, it’s hard for me to imagine AI making better decisions on pedagogy than a designer and faculty member.

Of course, using AI for that kind of design is probably limited (at least at first) to automating some tasks like uploading documents and updating calendars rather than creating lessons. Then again, I know that AI is being used to write articles for online and print publications, so it is certainly possible.

I read another article asking “Is Artificial Intelligence the Next Stepping Stone for Web Designers?” and, of course, my concerns are the same – lost jobs and bad design.

Certainly, we are already using AI in websites, particularly in e-commerce applications. But using AI to actually design a website is very different.

Some companies have started to use AI for web design. A user answers some questions to start a design: pick an industry or category (portfolio, restaurant, etc.), enter a business name, add a subtitle/slogan/brand, upload a logo, enter an address, hours of operation, and so on. The AI may offer you a choice of templates and then in a few clicks, the basics of the site are created.

This is an extension of the shift, 20 years ago, to template-driven web design. Now it is based on machine learning techniques, with human intervention at the initial stage, when users provide the desired information, and probably again after the site is created, to fine-tune it.

In my own instructional design work over the years, we have used templates for course shells. Standardizing the way courses look is a good thing in many ways. It makes rapid development easier. That was certainly the situation in spring 2020 as schools scrambled to move all their face-to-face courses online. A standard look also makes it easier for students to move from course to course.

Though not every course should be the same, the structure and components can generally be the same. This is also useful if you are trying to have courses comply with standards such as Quality Matters or ADA accessibility standards.

I do a lot of web design these days, and many popular companies, such as Squarespace, are using AI and machine learning to get ordinary users started. Does design still require some human intervention? Absolutely. Does the human need to be a “designer”? Clearly, the goal is to allow anyone to do a good job of creating a website without a designer.

I think there is an overlap between web design and course design. Add AI to either and the process can be made more efficient. I also think that you need people involved. For web design, that's a client and a designer. For course design, it's one or more faculty members and an instructional designer (ID). In my own work, I still find many people need someone with experience and training to create the website, but they can oftentimes maintain it on their own if the updates are simple. For courses, most faculty need help to create a course, but they generally not only can "maintain" it, they have to, because IDs can't always be there during a semester.

AI will change many industries and web and instructional design are certainly on the list of those industries. 

On the Road to Learning With a GPS

While I was driving in an unfamiliar neighborhood this week using my GPS, I started thinking about how great it would be if there were something like a GPS for learning.

Of course, there is adaptive learning and adaptive teaching. That is the idea of delivering a custom learning experience that addresses the unique needs of an individual. It does that by using just-in-time feedback, pathways, and a library of resources. This is not a one-size-fits-all learning experience.
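The adaptive idea can be sketched in a few lines: look at how a learner just performed and route them to an appropriate next resource. This is a toy illustration only; the score thresholds and activity names are invented, not taken from any real platform.

```python
# A toy adaptive-learning "router": pick the next activity from a
# learner's latest quiz score. Thresholds and resource names are
# invented for illustration, not from any real adaptive platform.

def next_step(score: float) -> str:
    """Return a suggested next activity for a score between 0 and 1."""
    if score < 0.5:
        return "remedial video + guided practice"
    if score < 0.8:
        return "targeted practice problems"
    return "enrichment project"

for s in (0.3, 0.65, 0.9):
    print(s, "->", next_step(s))
```

A real platform layers many such decisions over a library of resources and a model of the learner, but the core move, branching on evidence of understanding, is this simple.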

When I was studying education in college, we learned about creating a "roadmap" for learning. That was a long time ago when a paper roadmap was the way to travel. It was not adaptive. The user had to adapt. With the Internet came mapping websites. You put in a starting place and a destination and it finds a route. At first, there were no alternate routes, but when sites like Google Maps became available you could select alternatives. If you wanted to avoid a highway, you could drag the route around it.

Then came the GPS. We tend to call those devices "a GPS," but the Global Positioning System (GPS) is what makes the device work. It was developed to allow military and civil users to determine geographical locations accurately using satellites. Those devices had all the mapping features, plus they went with you in the car and, most importantly, they were adaptive. If you went down the wrong street or a road was blocked, they adapted your route.
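That kind of re-routing is, at its core, a shortest-path search that gets re-run on an updated map. Here is a minimal sketch using breadth-first search over an invented street grid; real GPS routing uses far more elaborate weighted algorithms, but the adaptive move is the same: when a road closes, search again.

```python
# A toy illustration of GPS-style re-routing: breadth-first search
# finds a shortest path, and when a road is "blocked" we simply
# search again on the updated map. The street grid is invented.
from collections import deque

def shortest_path(roads, start, goal):
    """BFS over an adjacency dict; returns a list of nodes or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in roads.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

roads = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(shortest_path(roads, "A", "D"))   # ['A', 'B', 'D']
roads["B"] = []                         # road B->D is blocked
print(shortest_path(roads, "A", "D"))   # re-route: ['A', 'C', 'D']
```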

When Google Maps, Apple Maps, Waze, and other apps became available on smartphones, the makers of GPS devices took a hit. Your phone knows where you are and where you want to go. It redirects you when needed. It gives immediate feedback on your progress and tells you your anticipated next step in advance.

Those first mapping programs weren't exactly what we would call artificial intelligence, but today AI is what drives mapping programs forward.

My driving notion of an AI/GPS for learning is here, though it's not quite a set-it-and-forget-it device yet. Several companies, such as Smart Sparrow, offer adaptive learning platforms. I know of a school using Pearson's program Aida Calculus (see video below) which connects multiple forms of AI to personalize learning. The program teaches students how to solve problems and gives real-world applications. Advanced AI algorithms have entered the education space.

Not every teacher or classroom has access to packaged programs for adaptive learning. In my pre-Internet teaching days, we called this approach individualized instruction, which also focuses on the needs of the individual student. It was a teacher-driven approach that tried to shift teaching toward specific, one-need-at-a-time targets.

Over the years, the terms individualized instruction, differentiated teaching, adaptive learning, and personalized learning have sometimes been used interchangeably. They are all related because they describe learning design that attempts to tailor instruction to the understanding, skills, and interests of an individual learner. Today, the tailoring is usually done through technology, but we can still use human intervention, curriculum design, pathways, and some blend of these.

 

 


Strong and Weak AI

Image by Gerd Altmann from Pixabay

Ask several people to define artificial intelligence (AI) and you'll get several different definitions. If some of them are tech people and the others are just regular folks, the definitions will vary even more. Some might say that it means human-like robots. You might get the answer that it is the digital assistant on their countertop or inside their mobile device.

One way of differentiating AI that I don't often hear is by the two categories of weak AI and strong AI.

Weak AI (also known as “Narrow AI”) simulates intelligence. These technologies use algorithms and programmed responses and are generally made for a specific task. When you ask a device to turn on a light, tell you the time, or find a channel on your TV, you're using weak AI. The device or software isn't doing any kind of "thinking," though the response might seem smart (as with many tasks on a smartphone). You are much more likely to encounter weak AI in your daily life.
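A minimal sketch of what that "seeming smart" looks like under the hood: keyword rules mapped to canned responses, with no understanding anywhere. Every intent and phrase here is invented for illustration, not taken from any real assistant.

```python
# A toy "weak AI" assistant: no thinking, just keyword rules mapped
# to canned responses. All intents and replies here are invented
# for illustration.

RULES = {
    ("light", "lamp"): "Turning on the light.",
    ("time",): "It is 12:00.",
    ("channel", "tv"): "Switching to that channel.",
}

def respond(utterance: str) -> str:
    """Return the first canned reply whose keywords match, else apologize."""
    words = utterance.lower().split()
    for keywords, reply in RULES.items():
        if any(k in words for k in keywords):
            return reply
    return "Sorry, I can't help with that."

print(respond("please turn on the light"))  # Turning on the light.
```

Real assistants use statistical models rather than literal keyword tables, but the narrowness is the same: outside its programmed tasks, the system has nothing to say.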

Strong AI is closer to mimicking the human brain. At this point, we could say that strong AI is “thinking” and "learning" but I would keep those terms in quotation marks. Those definitions of strong AI might also include some discussion of technology that learns and grows over time which brings us to machine learning (ML), which I would consider a subset of AI.

ML algorithms are becoming more sophisticated, and it might excite or frighten you as a user that they are getting to the point where they learn and execute based on the data around them. This is called "unsupervised ML," meaning the AI does not need to be explicitly programmed. In the sci-fi nightmare scenario, the AI no longer needs humans. Of course, that is not even close to true today, as the AI still requires humans to set up the programming and supply the hardware and its power. I don't fear an AI takeover in the near future.
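A classic example of unsupervised learning is clustering: the algorithm finds groupings in unlabeled data without being told what the groups are. This is a bare-bones, one-dimensional k-means sketch using only the standard library; real work would use something like scikit-learn, and the data here is invented.

```python
# Unsupervised learning in miniature: k-means finds groupings in
# unlabeled data without being told what the groups are.
# Pure-stdlib, one-dimensional sketch with invented data.
import random

def kmeans(points, k, iters=20, seed=0):
    """Cluster 1-D points into k groups; return the sorted centers."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: (p - centers[j]) ** 2)
            clusters[i].append(p)
        # Move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.3]  # two obvious groups
print(kmeans(data, k=2))
```

No one tells the algorithm that there are "low" and "high" groups; it discovers the pattern from the data alone, which is the point the paragraph above is making.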

But strong AI and ML can go through the huge amounts of data they are connected to and find useful patterns. Some of those are patterns and connections that a human would be unlikely to find. Recently, you may have heard of attempts to use AI to find a coronavirus vaccine. AI can do very tedious, data-heavy, time-intensive tasks in a much faster timeframe.

If you consider what your new smarter car is doing when it analyzes the road ahead, the lane lines, objects, your speed, the distance to the car ahead and hundreds or thousands of other factors, you see AI at work. Some of that is simpler weak AI, but more and more it is becoming stronger. Consider all the work being done on autonomous vehicles over the past two decades, much of which has found its way into vehicles that still have drivers.

Of course, cybersecurity and privacy become key issues when data is shared. You may feel more comfortable allowing your thermostat to learn your habits, or your car to learn how and where you drive, than you are about letting the government have that same data. Consider the amount of data we share online doing financial operations, or even just visiting sites, making purchases, and searching, and you'll find the level of paranoia rising. I may not know who you are as you read this article, but I suspect someone else knows and is more interested in knowing than I am.

I Am In a Strange Loop

    “The Treachery of Images” by René Magritte says that "This is not a pipe." A strange loop.

I got a copy of Douglas Hofstadter's book, Gödel, Escher, Bach: An Eternal Golden Braid, when I started working at NJIT in 2000. It was my lunch reading. I read it in almost-daily spurts. I often had to reread passages because it is not light reading.

It was published in 1979 and won the 1980 Pulitzer Prize for general non-fiction. It is said to have inspired many a student to pursue computer science, though it's not really a CS book. It is further described on its cover as a "metaphorical fugue on minds and machines in the spirit of Lewis Carroll." In the book itself, Hofstadter says, "I realized that to me, Gödel and Escher and Bach were only shadows cast in different directions by some central solid essence. I tried to reconstruct the central object, and came up with this book."

I had not finished the book when I left NJIT, and it went on a shelf at home. This summer, while trying to thin out my too-many books, I came upon it again, its bookmark glowering at me from just past the halfway point. So, I went back to reading it. Still tough going, though very interesting.

I remembered writing a post here about the book (it turned out to be from 2007) when I came upon a new book by Hofstadter titled I Am a Strange Loop. That "strange loop" was something he originally proposed in the 1979 book. This post is a rewrite and update on that older post.

The earlier book is a meditation on human thought and creativity. It mixes the music of Bach, the artwork of Escher, and the mathematics of Gödel. In the late 1970s, when he was writing, interest in computers was high, and artificial intelligence (AI) was still more of an idea than a reality. Reading Gödel, Escher, Bach exposed me to some abstruse math, like undecidability, recursion, and those strange loops. Each chapter has a dialogue between the Tortoise and Achilles and other characters to dramatize its concepts; this is where Lewis Carroll's "What the Tortoise Said to Achilles" gets referenced, though some of you will say it is really a Socratic dialogue, as in Zeno's fable of Achilles and the Tortoise. Allusions to Bach's music and Escher's paradox-loving art are also used, along with other mathematicians, artists, and thinkers. Gödel's Incompleteness Theorem serves as his example for describing the unique properties of minds.

I haven't read that newer book, but since I made it through the earlier volume (albeit in 18 years), I may give I Am a Strange Loop a try.

From what I have read about the author, he was disappointed with how Gödel, Escher, Bach (GEB) was received. It certainly got good reviews - and a Pulitzer Prize - but he felt that readers and reviewers missed what he saw as its central theme. I have an older edition, but in the 20th-anniversary edition he added that the theme was "a very personal attempt to say how it is that animate beings can come out of inanimate matter. What is a self, and how can a self come out of stuff that is as selfless as a stone or a puddle?"

I Am a Strange Loop focuses on that theme. In both books, he addresses "self-referential systems." (see link at bottom)
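A programmer's version of a strange loop is a quine: a program whose output is its own source code, a kind of "Drawing Hands" in code. This short Python example uses a classic quine pattern; it is offered as my own illustration, not something from Hofstadter's books.

```python
# A quine: a self-referential program that prints its own source.
# The string s describes the program, and the program uses s to
# reproduce itself: each half "draws" the other.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Run it, feed the output back in as a new program, and you get the same text again, forever: a loop that contains a complete description of itself.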

One thing that stuck with me from my first attempt at GEB is his use of "meta," which he defines as meaning "about." Some people might say that it means "containing." Back in the early part of this century, I thought about that when I first began using Moodle as a learning management system. When you set up a new course in Moodle (and in other LMSs since then), it asks if it is a "metacourse." In Moodle, that means a course that "automatically enrolls participants from other 'child' courses." Metacourses (AKA "master courses") feature all or part of the same content but are customized to the enrollments of other sections.

This was a feature used in big courses like English or Chemistry 101. In my courses, I thought more about having things like meta-discussions or discussions about discussions. My metacourse might be a course about the course. Quite self-referential.

I suppose it can get loopy when you start saying that if we have a course x, the metacourse X could be a course to talk about course x but would not include course x within itself. Though I suppose that it could.

Have I lost you?

Certainly, metatags are quite common on web pages and photos, and for cataloging, categorizing, and characterizing content objects. Each post on Serendipity35 is tagged with one or more categories and a string of keyword tags that help readers find similar content and help search engines make the post searchable.

A brief Q&A with Hofstadter published in Wired in March 2007 about the newer book says that he considers the central question to be "What am I?"

His examples of "strange loops" include Escher's piece, "Drawing Hands," which shows two hands drawing each other, and the sentence, "I am lying."

Hofstadter gets spiritual in his further thinking, finding at the core of each person a soul. He feels the "soul is an abstract pattern." Because he felt the soul is strong in mammals and weaker in insects, that belief brought him to vegetarianism.

He was long considered an AI researcher, but by then he thought of himself as a cognitive scientist.

Reconsidering GEB, he decided that another mistake in that book's approach may have been not seeing that the human mind and smarter machines are fundamentally different. He has less interest in computers now and claims that he always thought his writing would "resonate with people who love literature, art, and music" more than with tech people.

If it has taken me much longer to finish Gödel, Escher, Bach than it should have, that makes sense if we follow the strange loop of Hofstadter's Law: "It always takes longer than you expect, even when you take into account Hofstadter's Law."



End Note: 
A self-referential situation is one in which the forecasts made by the human agents involved serve to create the world they are trying to forecast (http://epress.anu.edu.au/cs/mobile_devices/ch04s03.html). Social systems are self-referential systems based on meaningful communication (http://www.n4bz.org/gst/gst12.htm).