Predictive, Descriptive and Prescriptive Analytics and the Movies





Tracy, Hepburn and EMERAC in DESK SET, 1957

I watched the 1957 film Desk Set over the holiday break. It is set at the fictional TV network FBN, the Federal Broadcasting Network (the exterior shots were done at Rockefeller Center, headquarters of NBC). Bunny Watson (Katharine Hepburn) is in charge of its reference library, which is responsible for researching and answering questions on almost any topic. With a secret merger pending, and anticipating a lot more demand on the department, the network boss has ordered two new computers.



Of course, this being 1957, the computers are called "electronic brains" in the film, and they are huge. Richard Sumner (Spencer Tracy) is their inventor, and the machine is called EMERAC. That name is wordplay on ENIAC - the Electronic Numerical Integrator And Computer, the first electronic general-purpose computer.



I also saw a new film over the break - The Imitation Game, based on Andrew Hodges' book Alan Turing: The Enigma. The ENIAC computer was considered to be "Turing-complete" - a term that comes from the work of Alan Turing. In the book and film, set during WWII, Turing is trying to crack the German Enigma code and, in the course of doing that, saves the Allies from the Nazis, and sort of invents the computer and artificial intelligence.



The Spencer Tracy character in the 1957 film was also trying to create a digital way of solving problems. He is described as an "efficiency expert," which was a new and big concern in the 1950s.



Today, predictive analytics has become a big topic in educational technology and I have written a number of posts about its use in education. It is a way of using statistics, modeling and data mining to analyze current and historical facts in order to make predictions about future events. An example of one of the desired educational uses is to monitor at-risk students and allow interventions at the proper times.



 



Data analytics in higher education is still in its early years and the terms have changed over the past few years as the use of the term "big data" has replaced "data mining" in popular conversations. Where I was once reading articles about using "descriptive analytics" - the analysis of historical data to understand what has happened in the past - now I'm more likely to find articles on "predictive analytics" - using historical data to develop models for helping to predict the future.



Prescriptive analytics takes those predictions and goes to the next step of prescribing recommendations or actions to influence what happens in the future.



Confused? As an example: using big data and descriptive analytics about students in general and about any particular student, we might predict that student's performance and problems in the current semester. Then, using a prescriptive-analytics-driven learning management system, we could recommend additional material and online resources, or even notify on-campus people and departments to interact with the student early on.
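To make the chain concrete, here is a minimal sketch of the three kinds of analytics applied to a single student. Everything in it is hypothetical: the data, the function names, the weights and the thresholds are illustrative assumptions, not a fitted model or a real learning-management-system API.

```python
# Descriptive: summarize what has already happened this semester
# (hypothetical data for one student).
history = {"quiz_avg": 0.58, "logins_per_week": 1.5, "assignments_missed": 3}

def predict_risk(h):
    """Predictive: a toy linear model mapping the historical summary to a
    dropout-risk score between 0 and 1. The weights are illustrative only."""
    score = ((1 - h["quiz_avg"]) * 0.5
             + (1 / max(h["logins_per_week"], 0.1)) * 0.1
             + h["assignments_missed"] * 0.1)
    return min(score, 1.0)

def prescribe(risk):
    """Prescriptive: map the predicted risk to a recommended action."""
    if risk > 0.6:
        return "notify advisor and schedule an in-person check-in"
    if risk > 0.3:
        return "recommend supplemental material in the LMS"
    return "no intervention needed"

risk = predict_risk(history)
print(f"risk={risk:.2f}: {prescribe(risk)}")
```

The point of the sketch is the division of labor: the descriptive layer only summarizes the past, the predictive layer turns that summary into a forecast, and the prescriptive layer is where a human-set policy decides what to do about it.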



Prescriptive analytics seems best suited for educational problems like student retention, enrollment management, prospect analysis, improving learning outcomes and curricular planning. These are all problems that can be addressed with data analytics because there is adequate high-quality data available to analyze them.



Did you read Moneyball or see the film version of Michael Lewis' popular book? It can be viewed as a story about the power of predictive analytics: Lewis describes how the Oakland A's general manager, working with the lowest team budget in Major League Baseball, used predictive analytics techniques to turn around his team's performance.



What schools need to do is similar to the "business rules" that companies formulate with input from various stakeholders in the organization.



Sometimes the data may produce results that are open to interpretation, and campus experts need to be involved. The article "Prescriptive Analytics for Student Success" points out that student data now includes data generated by mobile device usage, campus cards, social media and sensor technologies. It presents an interesting case of an on-campus student who is not using dining services as often as before. Does that mean the student isn't as active socially on campus? Is she depressed? Can we add class attendance or even clicker use to the data? Is she more likely to drop out?



These "alternative data sources" are still emerging and may cross over into FERPA and privacy concerns about what is permissible in data collection. 



Learning analytics is another term in use, and it seems to apply more to using learner-produced data and analysis models to discover information and social connections for predicting and advising on people's learning. This area sounds like it would appeal more to teachers than the earlier examples, but in some ways there is a lot of crossover in studying individual learners. The data might allow learners to reflect on their achievements and patterns of behavior in relation to their peers. It can warn them of topics or courses requiring extra support and attention. It can help teachers and support staff plan interventions with individuals and groups.



For departments and institutions, it can help improve current courses, help develop new offerings and develop marketing and recruitment strategies.



Predictive analytics is a big field and one that seems to strike fear into teachers much like those computers in Desk Set did more than 50 years ago. It encompasses statistical techniques from modeling, machine learning, and data mining. Do we trust the predictions about future, or otherwise unknown, events?



This is being done in actuarial science, marketing, financial services, insurance, telecommunications, retail, travel, healthcare, pharmaceuticals and other fields, but many educators (and I confess to still being one) are hesitant about moving business models into education.



You may be able to accurately do credit scoring (as in the FICO score) in order to rank-order individuals by their likelihood of making future credit payments on time, but can we really predict how a student will do next semester in Biology 102?
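In the same spirit as credit scoring, a model can at least rank-order students by predicted likelihood of passing. The sketch below is a toy logistic model with hand-set, illustrative coefficients (not fitted to any real data), estimating a pass probability for a hypothetical Biology 102 from prior GPA and attendance rate; the function name and inputs are assumptions for illustration.

```python
import math

def pass_probability(prior_gpa, attendance_rate):
    """Toy logistic model: coefficients are illustrative assumptions,
    not estimates from real student data."""
    z = -4.0 + 1.2 * prior_gpa + 2.0 * attendance_rate
    return 1 / (1 + math.exp(-z))  # logistic (sigmoid) function

# Two hypothetical students: (prior GPA, attendance rate)
students = {"A": (3.6, 0.95), "B": (2.1, 0.60)}

# Rank-order by predicted probability of passing, highest first,
# just as credit scoring rank-orders borrowers.
ranked = sorted(students, key=lambda s: pass_probability(*students[s]),
                reverse=True)
for s in ranked:
    print(s, round(pass_probability(*students[s]), 2))
```

Even if the absolute probabilities are questionable, the ordering is what a retention program would act on, which is exactly the sense in which FICO scores are used.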



Nate Silver gained a lot of attention with his blog and books, especially during the last presidential election, showing that predictive analytics pays big dividends in politics, sports and business.



Companies such as Google, Twitter and Netflix are hiring predictive analytics professionals to mine consumer behavior, and those analysts sit in the front office rather than crunching numbers behind the scenes.



In higher education, student retention is such a big concern that if colleges find success using data analytics, it will quickly find its way from administrative tasks into classrooms.



 


What were the 20 most popular web sites every year since 1996?

A few other sites have posted this graph of the 20 most popular web sites every year since 1996, but I think it's interesting enough to pass along.


1996-2000: This section of the graph covers the original dot-com boom era.




AOL really dominated at the start of the century. 




Looking at 2009-2013, the data (from comScore) shows the top five continues to be Google, Yahoo, Microsoft, Facebook and AOL, with Apple, LinkedIn and others mixed in with some of the "old media" companies.





see the full chart





Competency and Mastery: Part 2 - How to Measure It

Yesterday, I was writing about differentiating mastery and competency in light of movements such as competency-based education and degree programs.

The Mozilla Open Badge project and other initiatives have tried to standardize the use of badges for documenting learning. I like the idea but I don't see that badges have made any serious entry into educational institutions.

Badges have been used to mark what a person knows or what they can do. Proponents say using them is more student-centered and more about real student learning. It's certainly more real than using seat time and time on task as measurements. The fact that a student has completed 9 credit hours proves nothing, and more and more we hear that employers question whether an "A" grade for those 9 credits proves any mastery or competency either. Enter competency-based or evidence-based approaches to learning.

I still think about the merit badges I earned in scouting when this topic comes up. The badges were extrinsic motivators and they worked for me and most of my fellow scouts. You wanted to get them. I liked the ceremonial awarding of them at meetings and the recognition. My mom and my "den mother" were pretty conscientious about signing off that I had completed the requirements to earn them. But much of the work I had to do was on the "honor system" and I'm sure I cut corners on some things and got away with it.

If I earned a badge for "climbing" (as in rock and mountains), would you say I was competent at the sport? Would you say I had mastered it? I don't think I'd be comfortable saying either one of those things. I had learned about it and I had done some actual activities involved with it. I had not mastered it and I'm not sure a real climber would say I was competent enough to do it on my own or very seriously.

As Bernard Bull and others have pointed out, this same critique can be leveled at letter grades. Do both make school about "earning instead of learning?"

We also associate badges with video games and in the gamification of learning, they play an important role. In the pure gaming environment, earning badges, points, power pills, or whatever tokens are given sometimes does take precedence over learning. Then again, some games aren't much interested in learning.

It's better to think of badges as markers, milestones of progress rather than as a goal. 

The Mozilla project and others have tried to build more trust in badges as credentials and educational currency. Education has always valued tests, scores and credits as evidence of learning, even though we have been arguing about that evidence for hundreds of years and continue to do so.

If the organization awarding the badge is credible, then the real concern is what evidence is being used to determine the completion. As with the goals and objectives we now hold as important in schools, some things are more easily measured.

Want to earn the "Miler" badge? Then run a mile in under 5 minutes and have it verified by the teacher or coach. Want to earn the "Team Player" or "Leadership" badges? Then... play on a team... be the captain...  Hmmm. Those are tougher things to measure.

Students, teachers and schools have talked for a long time about trying to get away from a reliance on just grades, but grades persist. Portfolio assessment and other movements have made a dent in some instances, but the quantifiable test score still wins the day. That stopwatch on the mile runner is easily validated. Today there is more testing and data being used and more complaints about its use.

Learning Beyond Letter Grades was a course offered last year that examined why so many schools use and rely on letter grades. "Where did they come from? What do they tell us and fail to tell us about the learners? What is the relationship between letter grades, student learning, and assessment?" That's a lot to ask in a six-week course, but it comes from this desire many of us have to consider authentic and alternative assessments, peer assessment, self-assessment and badges.

Some badges set an expiration date, meaning the badge bearer will need to return for more training or provide updated evidence to keep the badge.  That's an idea from the world of professional development, licensing and credentials. If you earned a computer programming or phlebotomy badge in 2001, should it still be valid today? Perhaps not.

Perhaps the most difficult hurdle in launching a competency or mastery-based program might be how to assess/validate learning. We have been hitting that one back and forth for centuries.

Competency vs. Mastery: Part 1 - A Matter of Definitions?

In 2014, I started seeing more articles about Competency-Based Education (CBE) as the new approach to higher education degrees. In 2013, I think that "mastery" might have been the buzzier word. Mastery got a big push last year from things like Khan Academy and founder Sal Khan's belief that mastery of a skill or concept before moving on was what was lacking in American education overall.

A simplified explanation of the difference, perhaps from the view of an employer, is measuring what they know (mastery) versus what they can do (competency).

Is competency the new mastery? I did some searching and turned up a piece called "Competency vs. Mastery" by John F. Ebersole (president of Excelsior College) on the Inside Higher Ed site that compares these two approaches to "validating" learning.

He suggests that “competency” could be akin to subject matter “mastery” and might be measured in traditional ways - examinations, projects, and other forms of assessment.

Ask that hypothetical woman-on-the-street whether she would rather hire someone who had mastered a skill or someone who was competent, and I suspect mastery would win out. Of course, that person's ability to put what they have mastered into practice might still be in question.

It may be semantics, but considering someone to be "competent" sounds to many people like "adequate." That article gives as an example those instructors we have experienced as students who had "complete command of their subjects, but who could not effectively present to their students. The mastery of content did not extend to their being competent as teachers."

What would you say a subject matter exam measures? Mastery? Might an undergraduate have mastered subject matter or skills but still not be competent in her chosen field?

Looking online at the available books on competency-based education and training, most of them are in healthcare and clinical supervision, which are also the programs discussed in the article. Does the CBE approach work with other disciplines?

Some interest in CBE comes from that often-heard idea that employers don't view new college graduates as ready to do the job. They expect to have to further train the new hire who has "mastered content, but [is] not demonstrating competencies."

Ebersole says that "To continue to use 'competency' when we mean 'mastery' may seem like a small thing. Yet, if we of the academy cannot be more precise in our use of language, we stand to further the distrust which many already have of us."


In Part 2, I consider measuring mastery and competency.