Measuring MOOCs

MOOC enrollment surpassed 35 million in 2015, and though the blush is off the MOOC rose, MOOCs are clearly a new learning context for many people.

Are they transformative? I believe so, in the ways that they have created new platforms for online learning, reached new audiences and started conversations about alternative forms of learning and even alternatives within traditional education around credits and degrees.

Many of the current offerings labeled as MOOCs are really "moOCs" - that is, the numbers are less massive and the courses and content are not truly "open" in the original intent and definition of that word.

Certificate-granting MOOCs and ones for college credit and professional development are what I see as the current trends of interest.

But the topic of quality in MOOCs, and in online learning in general, has never gone away. Two recent reports examine that question.

The Babson annual “Online Report Card” is in its thirteenth (and, I read, its final) year. The report's introduction caught my attention because it says we are at a stage when “distance education is clearly becoming mainstream” and the divide between "online learning" and simply learning is less evident. The report doesn't spend much time on quality and seems to put MOOCs in the same category as other online learning.

The second report, “In search of quality: Using Quality Matters to analyze the quality of Massive, Open, Online Courses (MOOCs),” applied the Quality Matters™ higher education rubric (not the Continuing and Professional Development version) to six MOOCs offered by three providers: Coursera, edX, and Udacity.

Was that a fair test?

Critics of MOOCs will point to the result that all six MOOCs failed to meet QM’s passing grade of 85%. The QM rubric standards are grouped into eight dimensions, and the MOOCs performed especially poorly on learner interaction and engagement, and on learner support.

When my university first started having students evaluate their online courses, at the end of the last century, it used the same criteria and survey that were used for regular classes. That made some sense at first, because we wanted to measure one against the other. Online offerings always did well in the "use of technology and media" category, but not very well on some of the face-to-face items such as lectures and teacher engagement. After a few years, it was clear that a new survey made specifically for online courses was needed.

But even our online student survey would not be fair to use for a MOOC, especially one that is truly Massive and Open. A well-designed course with many thousands of students, using OER, possibly no textbook, and taken for no credit or fees simply cannot be fairly measured against a good online course with a small number of students motivated by tuition, a grade, credits, and a degree to be completed.

Efforts over the past few years to evaluate MOOCs and establish standards of quality are important, but we still have quite a ways to go.
