LinkedIn's Economic Graph

I wrote earlier about LinkedIn Learning, a new effort by the company to market online training. I said then that I did not think it would displace higher education any more than MOOCs or other online education have. But if it succeeds, it will be disruptive and may push higher education to adapt sooner.

LinkedIn’s vision is to build what it calls the Economic Graph. That graph will be created using profiles for every member of the work force, every company, and "every job and every skill required to obtain those jobs."

That concept reminded me immediately of Facebook's Social Graph. Facebook introduced the term in 2007 as a way to explain how the then-new Facebook Platform would take advantage of the relationships between individuals to offer a richer online experience. The term is now used in a broader sense to refer to a social graph of all Internet users.

[Image: social graph]

LinkedIn Learning is seen as a service that connects users, skills, companies, and jobs. LinkedIn acknowledges that, even with about 9,000 courses on its Lynda.com platform, it does not yet have enough content to accomplish that.

They are not going to turn to colleges for more content. Instead, they want to use the Economic Graph to determine which skills need content, based on corporate or local needs. That is not really the model colleges use to develop most new courses.
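
To make that difference concrete, here is a minimal sketch, in Python and with entirely invented data, of the kind of gap analysis an economic-graph-style dataset could support: list the skills that jobs in the graph demand, list the skills existing courses already cover, and see what is missing. None of the job titles, course names, or skills below come from LinkedIn.

    # A toy "economic graph": jobs and courses linked by the skills they
    # require or teach. All of the data here is invented for illustration.

    jobs = {
        "data analyst":      {"sql", "statistics", "data visualization"},
        "security engineer": {"networking", "python", "biometric computing"},
    }

    courses = {
        "SQL Essential Training": {"sql"},
        "Statistics Foundations": {"statistics"},
        "Python Quick Start":     {"python"},
    }

    # Skills employers ask for, and skills at least one course already teaches.
    demanded = set().union(*jobs.values())
    covered = set().union(*courses.values())

    # Content gaps: skills in demand that no existing course covers.
    print(sorted(demanded - covered))
    # ['biometric computing', 'data visualization', 'networking']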

But Lynda.com content is not made up of "courses" as we think of courses in higher ed. The training is built from short video segments and short multiple-choice quizzes. Enterprise customers can create playlists of content modules to assemble something course-like.

One critic of LinkedIn Learning said that this was an effort to be a "Netflix of education." That doesn't sound so bad to me. Applying data science to provide "just in time" knowledge and skills is something we have heard about in education, but it has never been done in any broad or truly effective way.

The goal is to deliver the right knowledge at the right time to the right person.
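
Stated as code, the matching problem is simple, even if doing it well at LinkedIn's scale is not. Here is a rough, hypothetical sketch: compare a member's skills with a target job's requirements and suggest courses that cover the gap. The function name and catalog are my own inventions, not anything LinkedIn has published.

    def recommend_courses(member_skills, job_skills, catalog):
        """Suggest courses that teach skills the member still needs for a job.

        `catalog` maps a course title to the set of skills it teaches.
        Purely illustrative; this is not how LinkedIn actually does it.
        """
        missing = job_skills - member_skills
        return [title for title, taught in catalog.items() if taught & missing]

    member = {"sql"}
    target_job = {"sql", "statistics", "data visualization"}
    catalog = {
        "Statistics Foundations":  {"statistics"},
        "Data Visualization Tips": {"data visualization"},
        "SQL Essential Training":  {"sql"},
    }

    print(recommend_courses(member, target_job, catalog))
    # ['Statistics Foundations', 'Data Visualization Tips']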

One connection for higher ed is that the company says it is launching a LinkedIn Economic Graph Challenge "to encourage researchers, academics, and data-driven thinkers to propose how they would use data from LinkedIn to generate insights that may ultimately lead to new economic opportunities."

Opportunities for whom? LinkedIn or the university?

This path is similar in some ways to adaptive-learning software that responds to the needs of individual students. I do like that LinkedIn Learning is also looking to "create" skills in order to fill perceived needs. Is there a need for training in biometric computing? Then create training for it.

You can try https://www.linkedin.com/learning/. When I went there, it knew that I was a university professor and showed me "trending" courses such as "How to Teach with Desire2Learn," "Social Media in the Classroom" and "How to Increase Learner Engagement." Surely, the more data I give them about my work and teaching, the more specific my recommendations will become.


Chasing the MUSE

[Image: ENIAC]

DARPA has a program called MUSE (Mining and Understanding Software Enclaves) that is described as a "paradigm shift in the way we think about software." The first step is no less than for MUSE to suck up all of the world’s open-source software. That would be hundreds of billions of lines of code, which would then need to be organized in a database.

One reason to attempt this is that the 20 billion lines of code written each year include a lot of duplication. MUSE will assemble a massive collection of chunks of code and tag them so that pieces of code can be automatically found and assembled. That means that someone who knows little about programming languages would be able to program.

Might MUSE be a way to launch non-coding programming?
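
Even a toy version makes the idea concrete. The sketch below, in Python, uses an invented corpus, invented tags, and an invented search function (nothing here comes from DARPA) to show what "describe what you want, get code back" could look like at the smallest possible scale.

    # A toy "mined" corpus: each stored snippet carries descriptive tags.
    # The snippets, tags, and search function are invented for illustration.

    corpus = [
        {"tags": {"sort", "list", "numbers"},
         "code": "def sort_numbers(xs):\n    return sorted(xs)"},
        {"tags": {"read", "csv", "file"},
         "code": "import csv\n\ndef read_csv(path):\n    with open(path) as f:\n        return list(csv.reader(f))"},
        {"tags": {"average", "mean", "numbers"},
         "code": "def mean(xs):\n    return sum(xs) / len(xs)"},
    ]

    def find_snippets(request):
        """Return stored code whose tags overlap the words of a plain-English request."""
        words = set(request.lower().split())
        return [entry["code"] for entry in corpus if entry["tags"] & words]

    # "Non-coding programming": describe what you need, assemble what comes back.
    for snippet in find_snippets("read a csv file and compute the mean"):
        print(snippet, end="\n\n")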

This can also fit in with President Obama’s BRAIN Initiative and it may contribute to the development of brain-inspired computers.

Cognitive technology is still emerging, but Irving Wladawsky-Berger, formerly of IBM and now at New York University, has said, “We should definitely teach design. This is not coding, or even programming. It requires the ability to think about the problem, organize the approach, know how to use design tools.”


Standardized Testing for College Admissions

Standardized testing for college admissions is now more than 100 years old. The College Board administered its first standardized entrance exams in 1901.

At that time, many universities had their own entrance exams, some requiring prospective students to come to campus for a week or more to take exams.

But different colleges meant different standards. Students needed to know what schools they were going to apply to in order to know what would be required. Some high schools offered separate instruction for students based on which colleges (or type of school) they hoped to attend.

Colleges on the other end might consider in admissions how well previous graduates of a particular high school had done, or might send faculty to visit high schools and rate them.

Most of us today are familiar with the SAT, which we took for college admission, but that test comes 25 years further along in this history.

From 1885 to 1900, groups including the National Education Association formed committees, discussed and argued, and finally created the College Board (initially as the College Entrance Examination Board) in the hope of standardizing high school curricula, with the goal of making a college education accessible to a wider pool of applicants.

This may sound somewhat similar to more modern efforts at standards, such as the current Common Core State Standards, including the standardized testing. Those first standardized college entrance exams (given to fewer than 1,000 students) tested English, French, German, Latin, Greek, history, mathematics, chemistry, and physics.

They were essay tests, not multiple choice. They were read and scored by a team of experts in each subject. That first year, the readers sat at tables in the library of Columbia University. The grades were: Excellent, Good, Doubtful, Poor, or Very Poor. These essay exams were used for several decades but never gained wide acceptance.

We would classify those tests as achievement tests because they tested students' proficiency in a subject. The College Board would next move to aptitude tests meant to measure intelligence, though achievement tests in specific subjects continued to be used for admissions.

I was surprised to read that the motivation to change was twofold. Since some high schools didn't teach certain subjects (like ancient Greek) or prepare students specifically for college, an aptitude test was supposed to make testing more accessible for those students. But a much more surprising and less well-known part of the history of the SAT is that some proponents of aptitude/intelligence testing were college officials who were concerned about the rapid influx of immigrants into their student bodies.

A Columbia University dean worried that the high numbers of recent immigrants and their children would make the school "socially uninviting to students who come from homes of refinement," and its president described the 1917 freshman class as "depressing in the extreme," lamenting the absence of "boys of old American stock." These college officials believed that immigrants had less innate intelligence than old-blooded Americans, and hoped that they would score lower on aptitude tests, which would give the schools an excuse to admit fewer of them.

It was a small group of men who drove the use of standardized testing in America. In 1925, the College Board commissioned a new, multiple-choice test designed by a Princeton psychology professor named Carl Brigham, who modeled it on his work with Army intelligence tests.

This new test was known as the Scholastic Aptitude Test or simply the SAT and it was first offered in 1926. Currently, more than 1.6 million students take the SAT each year.

Students started taking a new SAT in March 2016. Most students in the class of 2017 and beyond will take the new SAT in the spring of 11th grade and again in the fall of 12th grade.

As I wrote earlier, students are already prepping for the new test, including with free online prep offered by Khan Academy.


Clicking Links in an Online Course and Student Engagement


[Image: pie chart of overall LMS tool use, via blackboard.com]

Blackboard's data science people have done a study of all that student clicking in their learning management system, aggregating data from 70,000 courses at 927 colleges and universities in North America during the spring 2016 semester. That's big data.

But the results (reported this week on their blog) are not so surprising. In fact, the title of their own blog post on the results - "How successful students use LMS tools – confirming our hunches" - implies that we shouldn't be very surprised.

Let us look at the four most important LMS tools they found for predicting student grades. As someone who has taught online for fifteen years, I am not surprised that these tools are also the ones most frequently used.

On top was the gradebook - not the actual grades, but the finding that students who frequently check their grades throughout the semester tend to get better marks than those who look less often. "The most successful students are those who access MyGrades most frequently; students doing poorly do not access their grades. Students who never access their grades are more likely to fail than students who access them at least once. There is a direct relationship at every quartile of use – and at the risk of spoiling results for the other tools, this is the only tool for which this direct trend exists. It appears that students in the middle range of grades aren’t impacted by their use of the tool."
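
For anyone curious what that kind of quartile comparison looks like mechanically, here is a small sketch using fabricated numbers (Blackboard has not published its data or code, so the column names and values are mine): bucket students by how often they opened MyGrades, then compare average final grades across the buckets.

    import pandas as pd

    # Fabricated records: one row per student, with a count of MyGrades
    # (gradebook) accesses over the semester and a final grade out of 100.
    df = pd.DataFrame({
        "mygrades_accesses": [0, 1, 2, 3, 5, 8, 12, 15, 20, 30, 41, 55],
        "final_grade":       [52, 60, 63, 70, 68, 74, 75, 80, 82, 85, 88, 90],
    })

    # Bucket students into quartiles of gradebook use, then compare averages.
    df["usage_quartile"] = pd.qcut(df["mygrades_accesses"], 4,
                                   labels=["Q1 (least)", "Q2", "Q3", "Q4 (most)"])
    print(df.groupby("usage_quartile", observed=True)["final_grade"].mean())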

Next was their use of course content. That makes sense; actually, I would have thought it would be the number one predictor of success. Their data science group reports, "An interesting result was that after the median, additional access is related to a decline in student grade; students spending more than the average amount of time actually have less likelihood of achieving a higher grade!" That's not so surprising. Students spending more time (slow or distracted readers, ones who skimmed and need to repeatedly return to material, etc.) are probably having problems, rather than being more thorough. The student who spends an hour on a problem that should take 15 minutes is not showing grit.

This is followed by assessments (tests etc.) and assignments. "If students don’t complete quizzes or submit assignments for a course, they have lower grades than those who do so. This was not a surprising finding. What was surprising to me is that this wasn’t the strongest predictor of a student’s grade." Why is that surprising? Because it is what we use to evaluate and give those grades.

Digging a bit deeper into that data, Blackboard concludes that time is a factor, citing a "...strong decline in grade for students who spend more than the average amount of time taking assessments. This is an intuitive result. Students who have mastered course material can quickly answer questions; those who ponder over questions are more likely to be students who are struggling with the material. The relationship is stronger in assessments than assignments because assessments measure all time spent in the assessment, whereas assignments doesn’t measure the offline time spent creating the material that is submitted. Regardless, this trend of average time spent as the most frequent behavior of successful students is consistent across both tools, and is a markedly different relationship than is found in other tools."

The fifth tool was discussion. I have personally found discussions to be very revealing of a student's engagement in the course. I also find that level of engagement/participation correlates with final grades, but that may be because I include discussions in the final grade. I know lots of instructors who do not require them, don't grade them, or give them little weight in the final grade.

An article on The Chronicle of Higher Education website is a bit unsure of the value of all this big data. "But it’s hard to know what to make of the click patterns. Take the finding about grade-checking: Is it an existential victory for grade-grubbers, proving that obsessing over grades leads to high marks? Or does it simply confirm the common-sense notion that the best students are the most savvy at using things like course-management systems?"

And John Whitmer, director of learning analytics and research at Blackboard, says "I’m not saying anything that implies causality."

Should we be looking at the data from learning-management systems with an eye to increasing student engagement? Of course. But learning science is a new term and a new field, and I don't think we are so far past the data-collection stage that we have a clear learning path or solid course adjustments to recommend.

Measuring clicks on links in an LMS can easily be deceiving, as can measuring the time spent on a page or in the course. If you are brand new to the LMS, you might click twice as much as an experienced user. Spending 10 minutes on a page versus 5 minutes doesn't mean much either, since we don't know if the time was spent reading, rereading, or going out to get a coffee.
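
That ambiguity is built into how time usually has to be estimated from an LMS event log. All the system records are click timestamps, so "time on a page" is really just the gap until the next click, capped at some arbitrary idle limit. A rough sketch with made-up events (the log format and cutoff here are my own, not any vendor's):

    from datetime import datetime, timedelta

    # A made-up click log for one student: (timestamp, page). There is no
    # "stopped reading" event, so time on a page can only be inferred.
    clicks = [
        (datetime(2016, 4, 1, 10, 0), "Week 3 reading"),
        (datetime(2016, 4, 1, 10, 7), "Quiz 3"),
        (datetime(2016, 4, 1, 10, 52), "Gradebook"),  # 45-minute gap: reading, or coffee?
    ]

    IDLE_CAP = timedelta(minutes=30)  # arbitrary cutoff; changing it changes the totals

    for (t, page), (t_next, _) in zip(clicks, clicks[1:]):
        gap = t_next - t
        print(f"{page}: raw gap {gap}, counted as {min(gap, IDLE_CAP)}")
    # The last click gets no estimate at all - another source of guesswork.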

It's a start, and I'm sure more will come from Blackboard, Canvas, MOOC providers (who will have even greater numbers, though in a very different setting) and others.