Our Attention Economy

"Money follows eyeballs." I saw that phrase on a slide in a conference presentation about marketing with social media.

Everyone wants your attention. Your children want your attention. Your spouse wants your attention. You want the attention of your students. Nothing new about that concept and there are plenty of ways to get someone's attention.

But it is a more recent way of thinking about attention to consider it as economics. I was listening to the audiobook of A Beautiful Mind recently. It's a book (and a good but highly romanticized film) about the mathematician John Nash. Nash received the Nobel Prize in Economics for his work on game theory as it was applied to economics. His ideas, presented in the 1950s, certainly must have seemed novel at the time, but 40 years later they seemed logical. That will probably be true of attention economics. There are already a good number of people writing about it.

Attention economics is an approach to the management of information that treats human attention as a scarce commodity. With attention as a commodity, you can apply economic theory to solve various information management problems.

Attention is a scarce commodity or resource because a person has only so much of it.

Not only in economics but also in education and other areas, it is that focused mental engagement, the attention we give to a particular item, that leads to our decision on whether to act or not. Do we buy the item advertised? Do we do what mommy said to do?

We are deep into the Information Age, and content is so abundant and immediately available that attention has become a limiting factor. There are so many channels and shows on the many versions of "television" competing for our attention that you may just decide not to watch at all. Or you may decide to "cut the cord" and disconnect from many of them to make the choices fewer.

Designers know that if it takes users too long to locate something, you will lose their attention. On web pages, that attention lasts anywhere from a few seconds to less than a second. If users can't find what they are looking for, they will find it through another source.

The goal then becomes to design methods (filters, demographics, cookies, user testing etc.) to make the first content a viewer sees relevant. Google and Facebook want you to see ads that are relevant to YOU. That online vendor wants the products on that first page to be things you are most interested in buying. Everything - and everyone - wants to be appealing to everyone.

In attention-based advertising, attention is measured by the number of "eyeballs" that see the content.

"You can't please everyone." Really? Why not?

The history section of the Wikipedia entry on "Attention Economy" lists Herbert A. Simon as possibly the first person to articulate the concept of attention economics. Simon wrote: "...in an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it" (Simon 1971, pp. 40–41).

Simon was talking about information overload as an economic concept, and that has led business strategists such as Thomas H. Davenport to use the term "attention economy" (Davenport & Beck 2001).

Where will this lead? On the outer edges are those who speculate that "attention transactions" will replace financial transactions as the focus of our economic system (Goldhaber 1997; Franck 1999).

Designers of websites, software, apps and any user interface already take into account attention, but information systems researchers have also adopted the idea. Will we see mechanism designs which build on the idea of creating property rights in attention?


The Information Literacy of Fake News

Pre- and post-election last fall, there were many stories in all types of media about "fake news." An article in The Chronicle asks "How Can Students Be Taught to Detect Fake News and Dubious Claims?" but I would say that non-students need even more education in this area. Of course, the real question is whether or not this is a teachable skill.

If you had asked me last January to define "fake news" I would have said it was a kind of satire or parody of mainstream journalism. The Onion online, or Saturday Night Live's news segment, would fit that definition. Satire always has a bit of truth in it or it doesn't really work.

The Daily Show and Last Week Tonight with John Oliver and other shows and sites have blurred the line. They use real news and sometimes parody it, but sometimes they are closer to investigative journalism. They can edit together clips of a person's inconsistencies in views over the years and create a montage that shows someone who either has a terrible memory or is a liar. It may frighten some to hear it, but many young people and adults list shows like these as their main source for news.

The fake news that is really the focus of attention now comes from sources (almost exclusively online) that produce wholly fictionalized news stories. Those non-journalistic entities have a very powerful delivery system via social media like Facebook and Twitter.

A Stanford University report published last year concluded that many students could not detect fake or misleading information online. The researchers gave students from middle school to college tasks to see how well they could tell a native advertisement from a news article, identify a partisan website as biased, or separate a verified social-media account from an unauthenticated one.

A larger conclusion I see here is that faculty often assume that young people are fluent in or savvy about social media, in the same way that digital natives are assumed to know how to use smartphones, websites, photos, video and other digital technology. Bad assumption or expectation.

I remember teaching lessons on determining the veracity of research sources before there was an Internet and after. That has been a part of literacy education since the time when books became more common. I'm sure it was a teachable moment pre-print when a parent told a child to ignore gossip and stories from certain people/sources.

The Stanford researchers said that we need to teach "civic online reasoning," something that is needed well beyond academic settings.

In whose purview is this teaching? English teachers? Librarians? I would say it would only be effective if, like writing in the disciplines, it is taught by all teachers with a concentration on how it occurs in their field.

The science instructor needs to teach how to determine when science is not science. An easy task? No. Look at teaching the truth of climate science or evolution. It is controversial even if the science seems clear.

Napoleon Bonaparte is credited with saying that "History is a set of lies agreed upon." If that is true, how do we teach the truth about history past and the history that is unfolding before our eyes?

But we can't just say it's impossible to teach or assume someone else will take care of it. Information literacy is still a critical, difficult and overlooked set of skills to teach.


LinkedIn's Economic Graph

I wrote earlier about LinkedIn Learning, a new effort by the company to market online training. I said then that I did not think this would displace higher education any more than MOOCs or online education have. If successful, it will be disruptive and perhaps push higher education to adapt sooner.

LinkedIn’s vision is to build what it calls the Economic Graph. That graph will be created using profiles for every member of the work force, every company, and "every job and every skill required to obtain those jobs."

That concept reminded me immediately of Facebook's Social Graph. Facebook introduced the term in 2007 as a way to explain how the then new Facebook Platform would take advantage of the relationships between individuals to offer a richer online experience. The term is used in a broader sense now to refer to a social graph of all Internet users.

LinkedIn Learning is seen as a service that connects users, skills, companies, and jobs. LinkedIn acknowledges that even with about 9,000 courses on the Lynda.com platform, they don't have enough content to accomplish that yet.

They are not going to turn to colleges for more content. They want to use the Economic Graph to determine the skills that they need content to provide based on corporate or local needs. That is not really a model that colleges use to develop most new courses. 
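A minimal sketch of how that might work, using a made-up handful of jobs and courses rather than anything from LinkedIn's actual data model: treat jobs and courses as nodes connected to the skills they demand or teach, and the content gap is simply the set of skills in demand that no existing course covers.

```python
# A toy "economic graph": jobs demand skills, courses teach skills,
# and the gap between the two suggests where new content is needed.
# All of the data below is invented for the example; it is not LinkedIn's schema.

job_skills = {
    "data analyst":      {"sql", "statistics", "visualization"},
    "ux researcher":     {"user testing", "statistics"},
    "security engineer": {"networking", "cryptography"},
}

course_skills = {
    "Intro to SQL":            {"sql"},
    "Statistics Fundamentals": {"statistics"},
    "User Testing Basics":     {"user testing"},
}

skills_in_demand = set().union(*job_skills.values())
skills_covered = set().union(*course_skills.values())

# Skills employers are asking for that no existing course teaches yet.
content_gaps = skills_in_demand - skills_covered
print(sorted(content_gaps))  # ['cryptography', 'networking', 'visualization']
```

That is the inversion worth noticing: the gap analysis, not faculty expertise or curriculum committees, decides what gets built next.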

But Lynda.com content is not "courses" as we think of a course in higher ed. The training is based on short video segments and short multiple-choice quizzes. Enterprise customers can create playlists of content modules to create something course-like.

One critic of LinkedIn Learning said that this was an effort to be a "Netflix of education." That doesn't sound so bad to me. Applying data science to provide "just in time" knowledge and skills is something we have heard about in education, but it has never been used in any broad or truly effective way.

The goal is to deliver the right knowledge at the right time to the right person.

One connection for higher ed is that the company says it is launching a LinkedIn Economic Graph Challenge "to encourage researchers, academics, and data-driven thinkers to propose how they would use data from LinkedIn to generate insights that may ultimately lead to new economic opportunities."

Opportunities for whom? LinkedIn or the university?

This path is similar in some ways to instances of adaptive-learning software that responds to the needs of individual students. I do like that LinkedIn Learning also is looking to "create" skills in order to fulfill perceived needs. Is there a need for training in biometric computing? Then, create training for it.

You can try https://www.linkedin.com/learning/. When I went there, it knew that I was a university professor and showed me "trending" courses such as "How to Teach with Desire2Learn," "Social Media in the Classroom" and  "How to Increase Learner Engagement." Surely, the more data I give them about my work and teaching, the more specific my recommendations will become.


Chasing the MUSE

DARPA has a program called MUSE (Mining and Understanding Software Enclaves) that is described as a "paradigm shift in the way we think about software." The first step is no less than for MUSE to suck up all of the world’s open-source software. That would be hundreds of billions of lines of code, which would then need to be organized in a database.

One reason to attempt this is that the 20 billion lines of code written each year include lots of duplication. MUSE will assemble a massive collection of chunks of code and tag them so that code can automatically be found and assembled. That means that someone who knows little about programming languages would be able to program.

Might MUSE be a way to launch non-coding programming?
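As a purely illustrative sketch of that idea, imagine chunks of code indexed by descriptive tags, so that a person could assemble behavior by asking for it rather than writing it. The tiny index and tags below are my own invention, not anything from MUSE itself.

```python
# Toy illustration: index small chunks of code by descriptive tags,
# then retrieve whatever matches a plain-language request.
# The snippets here are stored as strings purely for demonstration.

snippet_index = {
    frozenset({"read", "csv"}):     "rows = list(csv.reader(open(path)))",
    frozenset({"average", "mean"}): "mean = sum(values) / len(values)",
    frozenset({"plot", "chart"}):   "plt.plot(values); plt.show()",
}

def find_snippets(*wanted_tags):
    """Return every indexed snippet whose tags overlap the request."""
    wanted = set(wanted_tags)
    return [code for tags, code in snippet_index.items() if tags & wanted]

# Someone who knows little about programming describes what they want...
for snippet in find_snippets("csv", "average"):
    print(snippet)
```

The hard parts MUSE is after, of course, are doing this at the scale of billions of lines and tagging the chunks automatically rather than by hand.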

This can also fit in with President Obama’s BRAIN Initiative and it may contribute to the development of brain-inspired computers.

Cognitive technology is still emerging, but Irving Wladawsky-Berger, formerly of IBM and now at New York University, has said “We should definitely teach design. This is not coding, or even programming. It requires the ability to think about the problem, organize the approach, know how to use design tools.”


Massive Open Online Research

Not that we need another acronym or abbreviation in education, but it seems that the MOOR has arrived. UC San Diego has the first major online course that features “massive open online research” (MOOR). The course is “Bioinformatics Algorithms — Part 1,” taught by computer science and engineering professor Pavel Pevzner and his graduate students.

The course is offered on Coursera and it combines research with a MOOC. Students will be given an opportunity to work on specific research projects under the leadership of prominent bioinformatics scientists from different countries, who have agreed to interact and mentor their respective teams. The goal of the course according to Pevzner is "to make you fall in love with bioinformatics."

The transition from learning to research can be a leap for students, and it can be difficult for students in isolated areas.

There is also an e-book, Bioinformatics Algorithms: An Active-Learning Approach, supporting the course. Professor Pevzner’s colleagues in Russia developed a content delivery system that integrates the e-book with hundreds of quizzes and dozens of homework problems.


Clicking Links in an Online Course and Student Engagement


Overall LMS tool use (pie chart), via blackboard.com



Blackboard's data science people have done a study of all that student clicking in their learning management system, aggregating data from 70,000 courses at 927 colleges and universities in North America during the spring 2016 semester. That's big data.

But the results (reported this week on their blog) are not so surprising. In fact, their own blog post title on the results - "How successful students use LMS tools – confirming our hunches" - implies that we shouldn't be very surprised.

Let us look at the four most important LMS tools they found for predicting student grades. As someone who has taught online for fifteen years, I am not surprised that these four tools are also the ones most frequently used. (After walking through them, I'll sketch with toy numbers what this kind of analysis looks like.)

On top was the gradebook - not the actual grades, but the finding that students who frequently check their grades throughout the semester tend to get better marks than those who look less often. "The most successful students are those who access MyGrades most frequently; students doing poorly do not access their grades. Students who never access their grades are more likely to fail than students who access them at least once. There is a direct relationship at every quartile of use – and at the risk of spoiling results for the other tools, this is the only tool for which this direct trend exists. It appears that students in the middle range of grades aren’t impacted by their use of the tool."

Next was their use of course content. That makes sense. Actually, I would have thought it would be the number one predictor of success. Their data science group reports "An interesting result was that after the median, additional access is related to a decline in student grade; students spending more than the average amount of time actually have less likelihood of achieving a higher grade!" That's not so surprising. Students spending more time (slow or distracted readers, ones who skimmed and need to repeatedly return to material, etc.) are probably having problems, rather than being more thorough. The student who spends an hour on a problem that should take 15 minutes is not showing grit.

This is followed by assessments (tests etc.) and assignments. "If students don’t complete quizzes or submit assignments for a course, they have lower grades than those who do so. This was not a surprising finding. What was surprising to me is that this wasn’t the strongest predictor of a student’s grade." Why is that surprising? Because it is what we use to evaluate and give those grades.

Digging a bit deeper in that data, Blackboard concludes that time is a factor, citing a "...strong decline in grade for students who spend more than the average amount of time taking assessments. This is an intuitive result. Students who have mastered course material can quickly answer questions; those who ponder over questions are more likely to be students who are struggling with the material. The relationship is stronger in assessments than assignments because assessments measure all time spent in the assessment, whereas assignments doesn’t measure the offline time spent creating the material that is submitted. Regardless, this trend of average time spent as the most frequent behavior of successful students is consistent across both tools, and is a markedly different relationship than is found in other tools."

The fifth tool was discussion. I have personally found discussions to be very revealing of a student's engagement in the course. I also find that the level of engagement/participation correlates with final grades, but that may be because I include discussions in the final grade. I know lots of instructors who do not require it or don't grade it or give it a small weight in the final grade.
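Here is the toy sketch I promised: a minimal version of the kind of quartile analysis Blackboard describes, applied to the gradebook finding. Every number in it is invented, the student records, the click counts, and the grades alike; it only makes the shape of the claim concrete.

```python
# Bin students into quartiles by how often they opened the gradebook,
# then compare the average final grade in each quartile.
# All data below is made up for illustration; it is not Blackboard's data.
from statistics import mean

# (gradebook_clicks, final_grade) for a dozen imaginary students.
students = [(0, 55), (0, 62), (1, 64), (2, 71), (4, 70), (6, 74),
            (8, 78), (11, 81), (15, 84), (18, 86), (22, 88), (27, 92)]

students.sort(key=lambda s: s[0])        # order by gradebook use
quartile_size = len(students) // 4

for q in range(4):
    chunk = students[q * quartile_size:(q + 1) * quartile_size]
    grades = [grade for _, grade in chunk]
    print(f"Gradebook-use quartile {q + 1}: mean grade {mean(grades):.1f}")
```

With toy data like this the means rise steadily across the quartiles, which is exactly the kind of "direct relationship at every quartile of use" the report describes. What the arithmetic cannot tell you is which way the arrow of causation points.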

An article on The Chronicle of Higher Education website is a bit unsure of all this big data's value. "But it’s hard to know what to make of the click patterns. Take the finding about grade-checking: Is it an existential victory for grade-grubbers, proving that obsessing over grades leads to high marks? Or does it simply confirm the common-sense notion that the best students are the most savvy at using things like course-management systems?"

And John Whitmer, director of learning analytics and research at Blackboard, says "I’m not saying anything that implies causality."

Should we be looking at the data from learning-management systems with an eye to increasing student engagement? Of course. Learning science is a new term and field, and I don't think we are far enough past the stage of collecting data to have clear learning paths or solid course adjustments to recommend.

Measuring clicks on links in an LMS can easily be deceiving, as can measuring the time spent on a page or in the course. If you are brand new to the LMS, you might click twice as much as an experienced user. Spending 10 minutes on a page versus 5 minutes doesn't mean much either, since we don't know whether the time was spent reading, rereading, or going out to get a coffee.

It's a start, and I'm sure more will come from Blackboard, Canvas, MOOC providers (who will have even greater numbers, though in a very different setting) and others.