Synergy

Synergy is one of those words that caught fire with the general public in the late 20th century, especially in tech-related fields. In general, it is taken to mean the interaction of two or more things (organizations, substances, products, fields, etc.) that produces a greater effect together than each would produce separately. Examples would be two colleges working jointly on a project, or the cooperation among pharmaceutical researchers in developing the COVID-19 vaccines.

But the word synergy is not a recent addition to the language. It appeared in the mid-19th century, mostly in the field of physiology, concerning the interaction of organs. It comes from the Greek sunergos, "working together," from sun- 'together' + ergon 'work'.

It has been used in diverse ways. In Christian theology, it was said that salvation involves synergy between divine grace and human freedom. I received a wedding engagement announcement that talked about the synergy between the two people. (They do both work in tech fields.)

Informational synergies, which also apply in media, involve compressing the time needed to transmit, access and use information, with the flows, circuits and means of handling information based on a complementary, integrated, transparent and coordinated use of knowledge.[32]

Walt Disney is given as an example of pioneering synergistic marketing. Back in the 1930s, the company licensed the Mickey Mouse character to dozens of firms for use in products and ads, and those products in turn helped advertise its films. This kind of marketing is still used in media. For example, Marvel films are promoted not only by the studio and the film distributors but also through licensed toys, games and posters.

Shifting to tech, synergy can also be defined as the combination of human strengths and computer strengths; the use of robots and AI offers clear examples. If you read into information theory, you will find discussions of synergy when multiple sources of information taken together provide more information than the sum of what each source provides alone.
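The classic textbook illustration of that information-theory sense of synergy is the XOR example: two fair coin flips each tell you nothing about their XOR, but together they determine it completely. Here is a minimal sketch (my own illustration, not from any source cited above) that computes the mutual information directly from the joint distribution:

```python
# The XOR example of informational synergy: each input alone gives 0 bits about
# the output, but both inputs together give 1 full bit -- more than the sum.
from itertools import product
from math import log2
from collections import Counter

# Joint distribution of (x1, x2, y) with y = x1 XOR x2; each input pair is equally likely.
joint = Counter()
for x1, x2 in product([0, 1], repeat=2):
    joint[(x1, x2, x1 ^ x2)] += 0.25

def mutual_info(keys):
    """I(X;Y) where X is the tuple of variables at the given indices of (x1, x2, y)."""
    px, py, pxy = Counter(), Counter(), Counter()
    for (x1, x2, y), p in joint.items():
        x = tuple((x1, x2, y)[i] for i in keys)
        px[x] += p
        py[y] += p
        pxy[(x, y)] += p
    return sum(p * log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())

print(mutual_info([0]))     # I(X1;Y)    = 0.0 bits
print(mutual_info([1]))     # I(X2;Y)    = 0.0 bits
print(mutual_info([0, 1]))  # I(X1,X2;Y) = 1.0 bit
```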

In education, synergy can occur when schools and colleges, departments, disciplines and researchers work together and accomplish more than any of them could separately.

Strong and Weak AI

Image by Gerd Altmann from Pixabay

Ask several people to define artificial intelligence (AI) and you'll get several different definitions. If some of them are tech people and the others are just regular folks, the definitions will vary even more. Some might say that it means human-like robots. You might get the answer that it is the digital assistant on their countertop or inside their mobile device.

One way of differentiating AI that I don't often hear is by the two categories of weak AI and strong AI.

Weak AI (also known as “Narrow AI”) simulates intelligence. These technologies use algorithms and programmed responses and generally are made for a specific task. When you ask a device to turn on a light or what time it is or to find a channel on your TV, you're using weak AI. The device or software isn't doing any kind of "thinking" though the response might seem to be smart (as in many tasks on a smartphone). You are much more likely to encounter weak AI in your daily life.

Strong AI is closer to mimicking the human brain. At this point, we could say that strong AI is "thinking" and "learning," but I would keep those terms in quotation marks. Definitions of strong AI might also include some discussion of technology that learns and grows over time, which brings us to machine learning (ML), a field I would consider a subset of AI.

ML algorithms are becoming more sophisticated, and it might excite or frighten you as a user that they are getting to the point where they learn and act based on the data around them. This is called "unsupervised ML," meaning the system does not need to be explicitly programmed for each category or rule. In the sci-fi nightmare scenario, the AI no longer needs humans. Of course, that is not even close to true today, since the AI still requires humans to write the programs and to supply the hardware and its power. I don't fear an AI takeover in the near future.
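The post doesn't name a specific algorithm, but a standard example of unsupervised learning is k-means clustering: the program is never told what the groups are, yet it finds them in unlabeled data. A minimal sketch, assuming scikit-learn is installed:

```python
# Unsupervised learning in miniature: k-means discovers groups in unlabeled data
# without being explicitly programmed with the categories.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two blobs of 2-D points -- no labels are ever provided.
data = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[5, 5], scale=0.5, size=(50, 2)),
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print(labels[:5], labels[-5:])  # the two groups are recovered from the data alone
```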

But strong AI and ML can go through the huge amounts of data they are connected to and find useful patterns, some of them patterns and connections that a human would be unlikely to find. Recently, you may have heard of attempts to use AI in the search for a coronavirus vaccine. AI can do very tedious, data-heavy and time-intensive tasks in a much shorter timeframe.

If you consider what your new smarter car is doing when it analyzes the road ahead, the lane lines, objects, your speed, the distance to the car ahead and hundreds or thousands of other factors, you see AI at work. Some of that is simpler weak AI, but more and more it is becoming stronger. Consider all the work being done on autonomous vehicles over the past two decades, much of which has found its way into vehicles that still have drivers.
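Much of that weaker, rule-based layer is just arithmetic applied many times a second. As a toy sketch (my own made-up numbers and function, not any manufacturer's logic), a following-distance warning can be as simple as checking the time gap to the car ahead:

```python
# Toy driver-assist rule (hypothetical): warn if the time headway to the car
# ahead drops below a safe threshold.
def following_alert(speed_mps: float, gap_m: float, min_headway_s: float = 2.0) -> bool:
    """Return True if the time gap to the car ahead is below the threshold."""
    if speed_mps <= 0:
        return False
    return (gap_m / speed_mps) < min_headway_s

print(following_alert(speed_mps=27.0, gap_m=40.0))  # ~1.5 s headway -> True, warn
print(following_alert(speed_mps=27.0, gap_m=60.0))  # ~2.2 s headway -> False
```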

Of course, cybersecurity and privacy become key issues when data is shared. You may feel more comfortable allowing your thermostat to learn your habits, or your car to learn how and where you drive, than you are about letting the government know that same data. Consider the amount of data we share online doing financial transactions, or even just visiting sites, making purchases and searching, and you'll feel the level of paranoia rising. I may not know who you are as you read this article, but I suspect someone else knows - and is more interested in knowing than I am.

Event-Based Internet

Event-based Internet is going to be something you will hear more about this year. Though I had heard the term used, the first real application of it that I experienced was a game. But don't think this is all about fun and games. Look online and you will find examples of event-based Internet biosurveillance and event-based Internet robot teleoperation systems and other very sophisticated uses, especially connected to the Internet of Things (IoT).

What did more than a million people do this past Sunday night at 9 pm ET? They tuned in on their mobile devices to HQ Trivia, a live game show played on their phones.

For a few generations that have become used to time-shifting their viewing, this real-time game is a switch. 

The HQ app has had early issues in scaling to the big numbers with game delays, video lag and times when the game just had to be rebooted. But it already has at least one imitator called "The Q" which looks almost identical in design, and imitation is supposed to be a form of flattery.

This 12-question trivia quiz has cash prizes. Usually the prize is $2,000, but sometimes it jumps to $10K or $20K. Since multiple players usually survive all 12 questions and split the pot, the payouts are often less than $25 each.

Still, I see the show's potential (is it actually a "show"?). The business model? Sponsors, commercial breaks, and product placement in the questions, answers and banter between questions.

The bigger trend here is that this is a return to TV "appointment viewing."  Advertisers like that and it only really occurs these days with sports, some news and award shows. (HQ pulled in its first audience of more than a million Sunday during the Golden Globe Awards, so...) 

And is there some education connection in all this?  Event-based Internet, like its TV equivalent, is engaging. Could it bring back "The Disconnected" learner?  

I found a NASA report on "Lessons Learned from Real-Time, Event-Based Internet Science Communications."  This report is focused on sharing science activities in real-time in order to involve and engage students and the public about science.

Event-based distributed systems are being used in areas such as enterprise management, information dissemination, finance, environmental monitoring and geo-spatial systems.

Education has been "event-based" for hundreds of years. But learners have been time-shifting learning via distance education and especially via online learning for only a few decades. Event-based learning sounds a bit like hybrid or blended learning. But one difference is that learners are probably not going to tune in and be engaged with just a live lecture. Will it take a real event and maybe even gamification to get live learning? 

In all my years teaching online, I have never been able to get all of a course's students to attend a "live" session, whether because of time zone differences, work schedules or perhaps content that just wasn't compelling enough.

What will "Event-based Learning" look like?

Edge Computing

I learned about edge computing a few years ago. It is a method of getting the most from data in a computing system by performing the data processing at the "edge" of the network. The edge is near the source of the data, not at a distance. By doing this, you reduce the communications bandwidth needed between sensors and a central datacenter. The analytics and knowledge generation are right at or near the source of the data.
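A minimal sketch of that idea (my own illustration; the function names and thresholds are made up): instead of streaming every raw sensor reading to a central datacenter, the edge device processes the batch locally and ships only a tiny summary - and makes time-critical decisions on the spot.

```python
# Edge computing in miniature: summarize raw readings locally and send a small
# payload upstream instead of every sample.
from statistics import mean

def summarize_at_edge(readings, alarm_threshold):
    """Reduce a batch of raw sensor readings to a small payload for the central service."""
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
        "alarm": max(readings) > alarm_threshold,  # decided locally, in milliseconds
    }

raw = [21.4, 21.6, 21.5, 35.2, 21.7]  # e.g., one second of temperature samples
print(summarize_at_edge(raw, alarm_threshold=30))
# -> {'count': 5, 'mean': 24.28, 'max': 35.2, 'alarm': True}
```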

The cloud, laptops, smartphones, tablets and sensors may be new things but the idea of decentralizing data processing is not. Remember the days of the mainframe computer?

The mainframe is/was a centralized approach to computing. All computing resources are at one location. That approach made sense once upon a time when computing resources were very expensive - and big. The first mainframe in 1943 weighed five tons and was 51 feet long. Mainframes allowed for centralized administration and optimized data storage on disc.

Access to the mainframe came via "dumb" terminals or thin clients that had no processing power. These terminals couldn't do any data processing, so all the data went to, was stored in, and was crunched at the centralized mainframe.

Much has changed. Yes, a mainframe approach is still used by businesses like credit card companies and airlines to send and display data via fairly dumb terminals. And it is costly. And slower. And when the centralized system goes down, all the clients go down. You have probably been in some location that couldn't process your order or access your data because "our computers are down."

It turned out that you could even save money by setting up a decentralized, or "distributed," client-server network. Processing is split between servers that provide a service and clients that request it. The client-server model needed PCs that could process data and perform calculations on their own in order for applications to be decentralized.
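Stripped to its essentials, the client-server split looks like this (a self-contained sketch using Python's standard library, not any particular system described above): one process provides a service, another requests it and then does its own processing on the result.

```python
# Minimal client-server sketch: a server answers requests; the client has its own
# processing power and works on the response locally.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from the server"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), EchoHandler)  # port 0 = pick any free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client requests the service, then does its own work on the result.
print(urlopen(f"http://127.0.0.1:{port}/").read().decode().upper())
server.shutdown()
```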


Google Co-Founder Sergey Brin shows U.S. Secretary of State John Kerry the computers inside one of Google's self-driving cars - a data center on wheels. June 23, 2016. [State Department photo/ Public Domain]

Add faster bandwidth and the cloud and a host of other technologies (wireless sensor networks, mobile data acquisition, mobile signature analysis, cooperative distributed peer-to-peer ad hoc networking and processing) and you can compute at the edge.  Terms like local cloud/fog computing and grid/mesh computing, dew computing, mobile edge computing, cloudlets, distributed data storage and retrieval, autonomic self-healing networks, remote cloud services, augmented reality and more that I haven't encountered yet have all come into being.

Recently, I heard a podcast on "Smart Elevators & Self-Driving Cars Need More Computing Power" that got me thinking about the millions of objects (Internet of Things) connecting to the Internet now. Vehicles, elevators, hospital equipment, factory machines, appliances and a fast-growing list of things are making companies like Microsoft and GE put more computing resources at the edge of the network. 

This is computer architecture not just for people but for things. In 2017, there were about 8 billion devices connected to the net, and that number is expected to reach 20 billion in 2020. Do you want the sensors in your car that are analyzing traffic and environmental data to send it all to some centralized resource - or to process it in the car? Milliseconds matter in avoiding a crash. You need the processing to be done at the edge. Cars are becoming "data centers on wheels."

Remember the early days of the space program? Almost all the computing power was on Earth. You have no doubt heard the comparison that the iPhone in your pocket has hundreds or even thousands of times the computing power of those early spacecraft. Keeping the processing at that distance was risky, but it was the only option. Now, much of the computing power is at the edge - even if the vehicle is at the edge of our solar system. And things that are not as far off as outer space - like a remote oil pump - also need to compute at the edge rather than connecting at a distance to processing power.

Plan to spend more time in the future at the edge.

Monetizing Your Privacy


Data is money. People are using your data to make money. What if you could sell, rather than give away, your private data? Is it possible that some day your data might be more valuable than the thing that is supplying your data?

In his book The Zero Dollar Car: How the Revolution in Big Data Will Change Your Life, John Ellis deals with big data and how it may change business models. He was Ford Motor Company's global technologist and head of the Ford Developer Program, so cars are the book's starting place, but insurance, telecommunications, government and home building are all addressed too. The book is not so much about protecting our data as users as it is about taking ownership of it. In essence, he is suggesting that users may be able to "sell" their data to companies (including data collectors such as Google) in exchange for free or reduced-cost services or things.

I'm not convinced this will lead to a free/zero-dollar car, but the idea is interesting. You are already allowing companies to use your data when you use a browser, shop at a website, or use GPS on your phone or in a car device. The growth of the Internet of Things (IoT) means that your home thermostat, refrigerator, television and other devices are also supplying your personal data to companies. And many companies (Google, Apple and Amazon are prime examples) use your data to make money. Of course, this is also why Google can offer you free tools and services like Gmail, Documents, etc.

Ellis talks about a car that pays for itself with your use and data, but the book could also be the Zero Dollar House or maybe an apartment. Big technology companies already profit from the sale of this kind of information. Shouldn't we have that option?

Duly noted: the data we supply also helps us. Your GPS or maps program uses your route and speed to calculate traffic patterns and reroute or notify you. The health data that your Apple watch or fitness band uploads can help you be healthier, and in aggregate it can help the general population too.

I remember years ago when Google began to predict flu outbreaks in geographic areas based on searches for flu-related terms. If all the cars on the road were Net-enabled and someone was monitoring their ambient temperature readings and their use of windshield wipers, what could be done with that data? What does an ambient temperature of 28°F and heavy wiper use by cars in Buffalo, New York indicate? A snowstorm. Thousands or millions of roaming weather stations. And that data would be very useful to weather services and to companies (like airlines and shippers) that rely on weather data - and are willing to pay for it.
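The inference itself is almost trivial once the data is flowing. A toy sketch of the Buffalo example (my own illustration; the thresholds and function are hypothetical):

```python
# Toy rule combining two signals a connected car already has to guess road weather.
def likely_conditions(ambient_f: float, wiper_duty: float) -> str:
    """wiper_duty: fraction of the last few minutes the wipers were running (0 to 1)."""
    if wiper_duty > 0.5 and ambient_f <= 32:
        return "likely snow"
    if wiper_duty > 0.5:
        return "likely rain"
    return "likely dry"

print(likely_conditions(ambient_f=28, wiper_duty=0.9))  # Buffalo example -> "likely snow"
```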

Am I saying that you should give up your privacy for money or services? No, but you should have that option - and the option to keep all your data private.

Machine Learning :: Human Learning

AI - "artificial intelligence" - was introduced as a term at a research conference at Dartmouth College in 1956. Back then it was a theory, but in the past few decades it has become far less theory and far more practice.

The role of AI in education is still more theory than practice.

A goal in AI is to get machines to learn. I hesitate to say "think," but that is certainly a goal too. I am currently reading The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution, and in that history there is a lot of discussion of people trying to get machines to do more than just compute (calculate) - to learn from their experiences without requiring a human to program those changes. The classic example is the chess-playing computer that gets better every time it wins or loses. Is that "learning"?

But has it had an impact on how you teach or how your students learn?

It may have been a mistake in the early days of AI and computers that we viewed the machine as being like the human brain. It is - and it isn't.

But neuroscientists are now finding that they can also discover more about human learning as a result of machine learning. An article on opencolleges.edu.au points to several interesting insights from the machine and human learning research that may play a role in AI in education.

One thing that became clear is that humans learn the physical environment more easily than machines do. After a child has started walking, opened a few doors or drawers, or climbed a few stairs, she has learned how to do it. Show her a different door, drawer or a spiral staircase and it doesn't make much of a difference. A robot equipped with some AI will have a much steeper learning curve for these simple things. It also has a poor sense of its "body." Just watch videos online of humanoid robots trying to do those things and you'll see how difficult it is for a machine.


Then again, it takes a lot longer for humans to learn how to drive a car safely on a highway. And even when the skill is learned, our attention, or lack thereof, is a huge problem. AI in vehicles is learning how to drive fairly rapidly, and its attention is superior to human attention. Currently, the system still falls back to the human driver in most cases, but that will certainly change in a decade or two. I learned to parallel park a car many years ago and I am still lousy at it. A car can do it better than I can.

Although computers can do tasks they are programmed to do without any learning curve, for AI to work they need to learn by doing - much like humans. The article points out that AI systems that traced letters with robotic arms had an easier time recognizing diverse styles of handwriting and letters than visual-only systems. 

AI means a machine gets better at a task the more it does it, and it can also apply that learning to similar but not identical situations. You can program a computer to play notes and play a series of notes as a song, but getting it to compose real music requires AI.

Humans also learn from shared experiences. A lot of the learning in a classroom comes from interactions between the teacher and students and student to student. This makes me feel pretty confident in the continued need for teachers in the learning process.

One day, I am sure, machines will communicate with each other and learn from each other. This may be part of the reason that some tech and learning luminaries like Elon Musk have fears about AI.

I would prefer my smart or autonomous vehicle to "talk" to other vehicles on the road nearby and share information about traffic, obstructions and those quirky vehicles nearby that still have human drivers.

AI built into learning systems, such as an online course, could guide the learning path and even anticipate problems and offer corrections to avoid them. Is that an AI "teacher" or the often-promoted "guide on the side?"

This year on the TV show Humans, one of the human couples goes for marriage counseling with a "synth" (robot). She may be a forerunner of a synth teacher.

The counselor (back to us) can read the husband's body language and knows he does not like talking to a synth marriage counselor.

 

Is Education Ready to Connect to the Internet of Things?


I first encountered the term "Internet of Things" (IoT) in 2013. It is the idea that "things" (physical devices) would be connected in their own network(s). The talk was that things in your home, office and vehicles would be wirelessly connected because they were embedded with electronics, software, sensors, actuators, and network connectivity. Things would talk to things. Things would collect and exchange data.

Some of the early predictions seemed rather silly. Taking a tagged carton of milk out of the refrigerator and not putting it back would tell my food ordering device (such as an Amazon Echo) that I was out of milk. My empty Bluetooth coffee mug would tell the Keurig coffeemaker to make me another cup.

But the "smart home" - an idea that pre-dates the Internet - where the HVAC knows I am almost home and adjusts the temperature from the economical setting to my comfort zone, and maybe turns on the front light and starts dinner, was rather appealing.

In 2014, the EDUCAUSE Learning Initiative (ELI) published its "7 Things You Should Know About the Internet of Things." The Internet of Things (and its annoying abbreviation, IoT) sounded rather ominous as I imagined these things proliferating across our social and physical landscapes. The ELI report said "the IoT has its roots in industrial production, where machine-to-machine communication enabled the manufacture of complex items, but it is now expanding in the commercial realm, where small monitoring devices allow such things as ovens, cars, garage doors, and the human heartbeat to be checked from a computing device."

Some of the discussions have also been about considerations of values, ethics and ideology, especially if you consider the sharing of the data gathered. 

As your watch gathers data about your activity, food intake and heart rate, it holds valuable data about your health. I do this with my Fitbit and its app. Perhaps you share that data with an online service (as with the Apple Watch and Apple itself) in order to get further feedback about your health and fitness, and even recommendations about ways to improve it. If you want a really complete analysis, you are asked (you hope) to share your medications, health history, etc. Now, what if that is shared with your medical insurer and your employer?

Might we end up with a Minority Report of predictive analytics that tell the insurance company and your employer whether or not you are a risk?

Okay, I made a leap there, but not a huge one. 

This summer, EDUCAUSE published a few articles on IoT in higher education and the collaboration required for the IoT to work. I don't see education at any level really making significant use of IoT right now, though colleges are certainly gathering more and more data about students. That data might be used to improve admissions. Perhaps your LMS gathers data about student activity and inactivity and can use it to predict which students need academic interventions.

Right now, finding educational uses for these connected things is more of an academic exercise.

History lesson: Way back in 1988, Mark Weiser talked about computers embedded into everyday objects and called this third wave "ubiquitous computing." Pre-Internet, this was the idea of many computers - not just the one on your desk - serving one person. Add ten years, and in 1999 Kevin Ashton posited a fourth wave, which he called the Internet of Things.

Connection was the key to both ideas. It took another decade until cheaper and smaller processors and chipsets, growing coverage of broadband networks, Bluetooth and smartphones made some of the promises of IoT seem reasonable. 

Almost any thing could be connected to the Internet. We would have guessed at computers of all sizes, cars and appliances. I don't think things such as light bulbs would have been on anyone's list.

Some forecasters predict 20 billion devices will be connected by 2020; others put the number closer to 40-100+ billion connected devices by that time.

And what will educators do with this?


Cognizant Computing in Your Pocket (or on your wrist)

Two years ago, I wrote about the prediction that your ever-smarter phone will be smarter than you by 2017. We are halfway there and I still feel superior to my phone - though I admit that it remembers things I can't seem to retain, like appointments, phone numbers, birthdays and such.

The image I used in that post was a watch/phone from The Jetsons TV show, which today might make you think of the Apple Watch, which is connected to that ever-smarter phone.

But the idea of cognizant computing is more about a device having knowledge of, or being aware of, your personal experiences and using that in its calculations. Smartphones will soon be able to predict a consumer's next move or next purchase, or interpret actions, based on what they know, according to Gartner, Inc.

These insights will be based on an individual's data, gathered using cognizant computing - "the next step in personal cloud computing."

“Smartphones are becoming smarter, and will be smarter than you by 2017,” said Carolina Milanesi, Research Vice President at Gartner. “If there is heavy traffic, it will wake you up early for a meeting with your boss, or simply send an apology if it is a meeting with your colleague."

The device will gather contextual information from your calendar, its sensors, your location and all the personal data you allow it to collect. You may not even be aware of some of the data it is gathering. And that's what scares some people.
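The mechanics behind Gartner's "wake you up early if there is heavy traffic" example are not mysterious. A toy sketch (hypothetical function, not any real assistant's API), combining calendar, commute and traffic data the device already has:

```python
# Toy "cognizant" alarm: back the wake-up time off the meeting start by prep time,
# the usual commute, and today's predicted traffic delay.
from datetime import datetime, timedelta

def alarm_time(meeting_start: datetime, usual_commute_min: int,
               traffic_delay_min: int, prep_min: int = 45) -> datetime:
    """Return when the device should wake you, given today's predicted delay."""
    return meeting_start - timedelta(minutes=prep_min + usual_commute_min + traffic_delay_min)

meeting = datetime(2017, 3, 6, 9, 0)
print(alarm_time(meeting, usual_commute_min=30, traffic_delay_min=25))
# -> 2017-03-06 07:20:00  (heavy traffic, so it wakes you 25 minutes earlier)
```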

When your phone became less important for making phone calls and added apps, a camera, location services and sensors, the lines between utility, social, knowledge, entertainment and productivity got very blurry.

But does it have anything to do with learning?

Researchers at Pennsylvania State University have already announced plans to test the classroom usefulness of eight Apple Watches this summer.

Back in the 1980s, there was much talk about artificial intelligence (AI). Researchers were going to figure out how we (well, really how "experts") do what we do and reduce those tasks to a set of rules that a computer could follow. The computer could become the expert. The machine would be able to diagnose disease, translate languages, even figure out what we wanted before we knew we wanted it.

AI got lots of  VC dollars thrown at it. But it was not much of a success.

Part of the (partial) failure can be attributed to a lack of computer processing power at the right price to accomplish those ambitious goals. The increase in power, drop in prices and the emergence of the cloud may have made the time for AI closer.

Still, I am not excited when I hear that this next phase will allow "services and advertising to be automatically tailored to consumer demands."

Gartner released a newer report on cognizant computing that continues the idea that it will be among "the strongest forces in consumer-focused IT" in the next few years.

Mobile devices, mobile apps, wearables, networking, services and the cloud are going to change educational use too, though I don't think anyone has any clear predictions.

Does more data make things smarter? Sometimes.

Will the Internet of Things and big data converge with analytics and make things smarter? Yes.

Is smarter better? When I started in education 40 years ago, I would have quickly answered "yes," but my answer is less certain these days.

 


How Netflix Is Using Your Taste in Movies

With all the attention that privacy (or the lack of it) received in 2013, there are some forms of snooping that you might actually appreciate.

If you use sites like Gmail or Facebook, you probably know that they are mining your data and usage in order to give you ads that are better-suited to your interests. I know that may not sound so great, but it is an improvement on getting ads that are completely irrelevant to your life. But sites like Amazon and Netflix are also mining your data, not to show you ads but to show you more relevant recommendations. Their systems have become more sophisticated and more granular at judging your preferences. On Netflix, they look at the genres that you watch. In the early days, these would have been broad categories like drama, comedy, action, romance, etc. But their genres have gotten very specific, sometimes to a humorous degree - as in Fight-the-System Documentaries, Period Pieces About Royalty Based on Real Life, Foreign Satanic Stories from the 1980s.

Alexis Madrigal is a senior editor at The Atlantic, where he oversees the Technology Channel. He's the author of Powering the Dream: The History and Promise of Green Technology. He wondered how Netflix, with its 40 million users (more than HBO now), decides on the genres a film fits into in order to quantify your personal tastes.

We sometimes call this a taxonomy - or a folksonomy, when it is done by "the crowd."

His interest turned into a bit of an obsession and then he discovered that he could scrape (capture) each and every microgenre that Netflix's algorithm has ever created. He discovered that Netflix possesses not several hundred genres, or even several thousand, but 76,897 unique ways to describe types of movies.

You may not be a movie fan, a Netflix subscriber or even very interested in Big Data - but organizations (companies and colleges) are very interested in knowing about you. Therefore, you should have some interest in, and understanding of, what is being done to you.

Madrigal wrote a script to pull that data and then spent several weeks understanding, analyzing, and reverse-engineering how Netflix's vocabulary and grammar work. He realized that there was no way he could go through all those genres by hand, so he used a piece of software called UBot Studio to incrementally go through each of the Netflix genres and copy them to a file. 

He discovered many very specific genres in the system, such as:

Emotional Independent Sports Movies

Spy Action & Adventure from the 1930s

Cult Evil Kid Horror Movies

Sentimental set in Europe Dramas from the 1970s

Romantic Chinese Crime Movies

Mind-bending Cult Horror Movies from the 1980s

Time Travel Movies starring William Hartnell

Visually-striking Goofy Action & Adventure

British set in Europe Sci-Fi & Fantasy from the 1960s

Critically-acclaimed Emotional Underdog Movies

In the article he wrote for The Atlantic, there is a generator that will give you many of the genres. It is an imperfect system. He found an oddly large number of genres for the actor Raymond Burr (best known for the old TV show Perry Mason). Why?
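You can see how the count explodes without reproducing Madrigal's script or Netflix's actual grammar. A toy sketch (my own descriptor lists, chosen to echo the examples above) that combines a few slots - adjective, region, genre, period - already yields hundreds of "microgenres":

```python
# Toy microgenre generator: a handful of descriptor slots multiply into a huge
# vocabulary of very specific genres.
from itertools import product
import random

adjectives = ["Emotional", "Critically-acclaimed", "Visually-striking", "Cult", "Romantic"]
regions    = ["", "British", "Chinese", "Foreign"]
genres     = ["Independent Sports Movies", "Horror Movies", "Crime Movies",
              "Documentaries", "Sci-Fi & Fantasy"]
periods    = ["", "from the 1930s", "from the 1970s", "from the 1980s"]

combos = [" ".join(part for part in combo if part)
          for combo in product(adjectives, regions, genres, periods)]

print(len(combos))            # 5 * 4 * 5 * 4 = 400 from just four short lists
print(random.choice(combos))  # e.g. "Cult British Horror Movies from the 1980s"
```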

He explains: "The vexing, remarkable conclusion is that when companies combine human intelligence and machine intelligence, some things happen that we cannot understand. 'Let me get philosophical for a minute. In a human world, life is made interesting by serendipity,' Yellin told me. 'The more complexity you add to a machine world, you're adding serendipity that you couldn't imagine. Perry Mason is going to happen. These ghosts in the machine are always going to be a by-product of the complexity. And sometimes we call it a bug and sometimes we call it a feature.' Perry Mason episodes were famous for the reveal, the pivotal moment in a trial when Mason would produce the crucial piece of evidence that makes it all make sense and wins the day. Now, reality gets coded into data for the machines, and then decoded back into descriptions for humans. Along the way, humans' ability to understand what's happening gets thinned out. When we go looking for answers and causes, we rarely find that aha! evidence or have the Perry Mason moment, because it all doesn't actually make sense. Netflix may have solved the mystery of what to watch next, but that generated its own smaller mysteries. Sometimes we call that a bug, and sometimes we call it a feature."


Gartner's Trends List for 2012

Now that we are midway through 2012, the Gartner Symposium gives us the IT research firm's tech trends for the year. As usual, it's a mix of emerging and existing technologies.

According to campustechnology.com, the top ten includes:

    The use of media tablets and other small-form-factor computing devices;

    The continuing explosion of mobile-centric applications and interfaces;

    The growth of app stores and marketplaces;

    Contextual and social user experience;

    The "internet of things";

    Next-generation analytics;

    The proliferation of big data;

    The smarter use of in-memory computing;

    The recognition of the value of extreme low-energy servers; and

    The continued acceptance of cloud computing.


Technologies on the Horizon That Will Impact Higher Education

Every year I read the NMC Horizon Report to see which technologies they predict will have an impact on higher education. The 2012 report was released jointly by NMC and the EDUCAUSE Learning Initiative.

The report looks at technologies that will have an impact in the next five years (near term, mid-term, and longer term). They also examine "critical challenges" facing education.

The near-term technologies are mobile apps and tablet computing, which are changing the nature of computing for end users and developers. Larger suites of integrated software are being replaced by free or cheap apps that focus on doing one or a few things well and integrate easily with other apps. And though I agree that mobile computing and tablets (iPads, etc.) are influencing teaching and learning, I don't see any clear impact yet.

Two technologies that are more mid-term (2 or 3 years from having a major impact) are game-based learning and learning analytics.

"Learning analytics" may have more of an impact on the administration and decision-making levels than directly in the classroom. The term usually refers to both traditional strategies used in student retention, and newer methods of aggregating data from many sources to get a picture of how learning is happening and what is working best. If you have read previous reports, you know that long-term items, like learning analytics, often move closer in time if they grab traction in schools. Learning analytics, for example, seems to have benefited from some funded initiatives in the past few years.

Game-based learning has been on the list for a few years, but I don't feel it has gotten any closer to making an impact. At one time, virtual worlds were a somewhat related technology, and they have almost dropped off the educational planet in the past two years. Both technologies offer the possibility of fostering collaboration, problem solving, communication, critical thinking and digital literacy, but the results have not been all that impressive. Online social games have certainly been big the past few years, but their application or any transfer to learning is still lacking.

If you're writing your proposals for grants and conferences, you might want to get a jump on those technologies that are still four or five years out. Two to look into are gesture-based computing and the "Internet of Things."

Gesture-based computing fits right into gaming and mobile devices. Think of Wii games and swiping that smartphone or tablet. The idea driving its use in education is that it can transcend linguistic and cultural limitations. Watch a two-year-old play with an iPad and you realize that not relying on language, or on any specific language, might be a major plus. These devices also encourage interaction and just plain old play as a way to explore and learn. That is certainly true with younger students, but not lost on older and adult learners. Android and Apple smartphones and tablets, the Microsoft Surface, the ActivPanel, the Nintendo Wii and Microsoft Kinect systems are all playing with these ideas.



Internet of Things

The "Internet of Things" is further out there, both in years and in my ability to explain exactly what it means, or might one day mean, to education. It is about the evolution of smart objects: items interconnected in ways that make the line between the physical object and digital information blurry or invisible.

You should look into IPv6 and how it is used in small devices with unique identifiers. You probably know a bit about RFID devices that are used in stores to track products, purchases and inventory. They store data and they can send that information to external devices via the Internet. We can already use them in schools to do similar things like tracking attendance, research subjects, and equipment. But how it might be used for learning is about as blurry as the line it is erasing.
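The record-keeping side of something like RFID attendance is simple; the hard part is the reader hardware and the policy questions. A sketch under stated assumptions (the tag IDs, roster and handler are hypothetical; a real reader would push tag reads to something like this):

```python
# Hypothetical RFID attendance logger: a reader at the classroom door calls
# on_tag_read() whenever it sees a tag.
from datetime import datetime

roster = {"04A1B2": "Alice", "04C3D4": "Bob"}   # tag ID -> student (made-up IDs)
attendance = {}

def on_tag_read(tag_id: str, room: str) -> None:
    """Record that the student wearing this tag was in this room today."""
    student = roster.get(tag_id)
    if student:
        attendance.setdefault((room, datetime.now().date()), set()).add(student)

on_tag_read("04A1B2", room="Room 101")
on_tag_read("04C3D4", room="Room 101")
print(attendance)
```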

Which brings us to challenges. In brief, these are the five technology-oriented challenges facing higher education according to the report.

1) Economic pressures from new education models, forcing traditional institutions to control costs while maintaining services;

2) The need for new forms of scholarly corroboration as traditional peer review and approval become more and more difficult to apply in light of new methods of dissemination;

3) The growing importance of digital literacy and lack of digital literacy preparation among faculty;

4) Traditional institutional barriers to the adoption of new technologies; and

5) Technological upheavals that are putting libraries "under tremendous pressure to evolve new ways of supporting and curating scholarship."

In my educational world, economics is very important, but the barriers of 3 and 4 are much tougher to overcome.


The Internet of Things




It has been five years since Tim O'Reilly pitched the idea of Web 2.0. The term caught on in a big way. In fact, almost everything seems to be labeled 2.0 these days.



Recently, O'Reilly and John Battelle (they run the Web 2.0 conference - now a Web 2.0 Summit - together along with TechWeb) released a white paper called "Web Squared: Web 2.0 Five Years On."



One thing the paper examines is how the social web might intersect with the Internet of Things. Not familiar with that? Don't feel badly. It hasn't caught popular fire yet, so there's still time for you to read up and be able to chat about it at the first faculty meeting.



The Internet of Things is concerned with real-world objects that are connected to the Internet. The concept is associated with the Auto-ID Labs. The connected objects might be household appliances, cars, books or any electronic, "smart," or RFID-enabled object.



If this sounds like something from The Jetsons, you're on the right track. Here's what was said recently on ReadWriteWeb about one smart appliance.



The Internet fridge is probably the most oft-quoted example of what the Internet of Things - when everyday objects are connected to the Internet - will enable. Imagine a refrigerator (so the story goes) that monitors the food inside it and notifies you when you're low on, for example, milk. It also perhaps monitors all of the best food websites, gathering recipes for your dinners and adding the ingredients automatically to your shopping list. This fridge knows what kinds of foods you like to eat, based on the ratings you have given to your dinners. Indeed the fridge helps you take care of your health, because it knows which foods are good for you and which clash with medical conditions you have. And that's just part of the sci-fi story of the Internet fridge.
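A toy version of that scenario (illustrative only, not any real appliance's software): the fridge keeps a simple inventory, decrements it when a tagged item leaves and isn't returned, and flags what has run low.

```python
# Toy "Internet fridge" inventory: flag items that have dropped to their reorder level.
inventory = {"milk": 1, "eggs": 6, "butter": 2}
reorder_at = {"milk": 1, "eggs": 4, "butter": 1}

def item_removed(item: str) -> None:
    """Called when a tagged item leaves the fridge and isn't put back."""
    inventory[item] = max(0, inventory.get(item, 0) - 1)

item_removed("milk")
shopping_list = [item for item, qty in inventory.items() if qty <= reorder_at.get(item, 0)]
print(shopping_list)  # -> ['milk']
```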


Okay, so my home gets smarter and more connected. What about schools?



The O'Reilly paper defines "web squared" as "web meets world." (Also the idea that the web is growing exponentially.) Something that still holds over from the Web 2.0 concept of 2004 is the belief that this new web would harness collective intelligence. In 2009, that includes mobile devices and internet-connected objects.



Smartphones with a microphone, camera, motion sensor, proximity sensor, and location sensor (GPS) are powerful "things" that can be used in the classroom but can be taken home and into the field.



Sure, RFID tags can keep track of books in the bookstore or storeroom, but, hopefully, educators and students will come up with more than a supermarket approach to the Internet of Things.



Aren't classrooms supposed to be about collective intelligence? Isn’t intelligence, at least partially, the characteristic that allows an organism to learn from and respond to its environment?



I doubt that I will have the money to make it out to the Summit in San Francisco this October (it's by invitation), but I hope there are some educators attending (and presenting?). Schools can't afford to be left on the beach just watching another technology wave crest.



By the way, the Internet of Things is not the Web of Things. It's a very tangled web we are weaving...



Download the Web Squared White Paper (PDF, 1.3MB)



Watch the Web Squared Webcast