Jeffrey R. Young moderated a panel at the Reimagine Education conference that was a debate on the question, “Is the Classroom Dead?” There were two people making a case for the need for in-person gatherings of learners (the traditional classroom) and two arguing that the classroom has outlived its usefulness.
Young's own post about it had what might be a more accurate title question: What If We Stopped Calling Them Classrooms?
What do you picture when you think of the word classroom? A teacher in front of a group of students in a room that probably has rows of seats/desks. How does that model match trends in education today?
NJIT once held the trademark on the term "virtual classroom," and that term was often used in the early days of online education to describe what we were trying to do. The instructional design of the time followed the term and tried, as much as possible, to reproduce the classroom online. That meant 90-minute lectures, sometimes recorded live in a physical classroom in front of students (lecture capture is still being done today). It meant having ways to "raise your hand" to answer or ask questions. It meant tests and quizzes, ways to submit work, and a gradebook.
But is that the way we should design online learning? Is it even the way we should be teaching in a physical classroom today?
One thing we seem to have gleaned from MOOCs is that the optimal length of a video lecture is five to seven minutes. Has that been adopted in most face-to-face or even online courses? No. Should we be teaching in a classroom in chunks of seven-minute lessons?
Not calling a classroom a classroom solves nothing. Calling a school library a media center doesn't mean much if the physical space and its contents remain a library.
Yes, this post is more questions than answers, but perhaps questioning what the classroom is in 2017 is where we are right now.
I was curious to look at this study that analyzes nontraditional students' perceptions of online course quality and what they "value." The authors categorized students into three groups: traditional, moderately nontraditional, and highly nontraditional. Those distinctions are what initially got my attention.
I hear more and more about "nontraditional students," to the degree that I'm beginning to believe that, like any growing minority, they will become a majority and then be what is "traditional." For years, I have been reading about and experiencing the decline of traditional students: those who graduated high school and immediately went on to attend a four-year college, full time, with the majority living on campus. They are an endangered species, and that scares many schools.
In this study, they say that "There is no precise definition for nontraditional students in higher education, though there are several characteristics that are commonly used to identify individuals labeled as nontraditional. A study by the National Center for Educational Statistics (NCES, 2002), identified nontraditional students as individuals who meet at least one of the following qualifiers: delays enrollment, attends part-time for at least part of the academic year, works full-time, is considered financially independent in relation to financial aid eligibility, has dependents other than a spouse, is a single parent, or does not have a high school diploma. Horn (1996) characterized the “nontraditional-ness” of students on a continuum depending on how many of these criteria individuals meet. In this study, respondents’ age, dependents, employment status and student status are used to define nontraditional students."
Two-year schools as a degree and job path, part-time students working full-time, older students returning to education and other "non-traditional" sources of learning (for-profits, training centers, alternative degrees, MOOCs) have all made many students "non-traditional." Some people have talked about the increasing number of "non-students" who are utilizing online training without any intention of getting credits or a certificate or degree.
The things the non-traditional students in the study value are not surprising: clear instructions on how to get started, clear assessment criteria, and access to technical support if something goes wrong. How different from the traditional students would that be?
The conclusions of the study suggest that "nontraditional students differ from more traditional students in their perceptions of quality in online courses," but they also say that "All students place great importance on having clear statements and guidelines related to how their work will be assessed." The overlap is that students always want to know "what they need to do in order to get an A."
One belief of the authors that matches what I have observed in my 16 years of teaching online is that non-traditional students (however we define them) have "multiple responsibilities and they need to ensure that the time spent on their coursework is beneficial and productive." As teachers, we would hope that this is true of all our students, even the very traditional ones who may have fewer non-academic concerns and responsibilities.
For teachers and instructional designers, this reinforces the idea that students need courses that are well designed, consistently presented, easily navigable, and appropriately aligned, with clearly stated expectations and information about how, and whom, to contact when they encounter challenges to learning. In that sense, we are all non-traditional.
Most teachers have stated learning objectives for their courses. They describe what we plan to teach and how we plan to assess students.
You may have read this summer about a case involving whether a professor can be required to write those down on a syllabus. A professor at the College of Charleston brought a lawsuit against the school that claimed that he was losing his job for refusing to include learning outcomes (the same as objectives?) in his syllabus.
Can a course exist without learning objectives? The answer is a pretty resounding No, though I'm sure students could point out some truly dreadful courses that lacked clear objectives or outcomes. The real question in that case is whether the objectives are stated explicitly to students, probably in the syllabus. Can a course exist without clearly stated objectives? To that second question, my answer is a resounding Yes.
Faculty need to consciously establish their goals and objectives in designing the course, but they also need to communicate those to students.
I would say that kind of information should be available to a student before she even signs up for the course, perhaps in a course catalog or on a page about the course online. The objectives should then be explained in greater detail in the syllabus and in the course itself.
That is an instructional design task. I was surprised at how difficult it was to get the faculty I worked with on course design to understand the difference between a goal and an objective. We can get bogged down and confused talking about goals, objectives, and outcomes. If faculty are confused, students certainly will be as well.
In a bit of an oversimplification, a goal is an overarching principle that guides decision making, while objectives are specific, measurable steps that can be taken to meet the goal.
You can further muddy this academic water by adding similar, but not interchangeable, terms such as competencies and outcomes. In this document I found online and used in some version for faculty workshops, it says: "From an educational standpoint, competencies can be regarded as the logical building blocks upon which assessments of professional development are based. When competencies are identified, a program can effectively determine the learning objectives that should guide the learners’ progress toward their professional goals. Tying these two together will also help identify what needs to be assessed for verification of the program’s quality in its effectiveness towards forming competent learners...In short, objectives say what we want the learners to know and competencies say how we can be certain they know it."
Whatever terminology you use, teachers need to know the larger goals in order to design how those goals will be presented and how they will be measured. Students need to know those last two parts as early as possible.
Blackboard's data science group has done a study of all that student clicking in its learning management system, aggregating data from 70,000 courses at 927 colleges and universities in North America during the spring 2016 semester. That's big data.
But the results (reported this week on their blog) are not so surprising. In fact, their own blog post title on the results - "How successful students use LMS tools – confirming our hunches" - implies that we shouldn't be very surprised.
Let's look at the four LMS tools they found most important in predicting student grades. As someone who has taught online for fifteen years, it makes sense to me that these four tools are also the ones most frequently used.
On top was the gradebook - not the actual grades, but that students who frequently check their grades throughout the semester tend to get better marks than do those who look less often. "The most successful students are those who access MyGrades most frequently; students doing poorly do not access their grades. Students who never access their grades are more likely to fail than students who access them at least once. There is a direct relationship at every quartile of use – and at the risk of spoiling results for the other tools, this is the only tool for which this direct trend exists. It appears that students in the middle range of grades aren’t impacted by their use of the tool."
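Blackboard hasn't published its analysis code, but the kind of quartile comparison described above is simple to sketch. Here is a minimal illustration in Python, using invented (access count, final grade) pairs rather than any real LMS data; the numbers and the `quartile_means` helper are purely hypothetical:

```python
import statistics

# Invented records of (gradebook_access_count, final_grade) --
# synthetic stand-ins, not Blackboard's actual dataset.
records = [
    (0, 55), (1, 62), (2, 68), (3, 70), (4, 74), (5, 75),
    (6, 78), (8, 80), (10, 83), (12, 85), (15, 88), (20, 91),
]

def quartile_means(rows):
    """Split rows into four quartiles by access count and
    return the mean grade within each quartile."""
    rows = sorted(rows, key=lambda r: r[0])
    size = len(rows) // 4
    return [
        statistics.mean(grade for _, grade in rows[i * size:(i + 1) * size])
        for i in range(4)
    ]

# With this toy data, mean grade rises with each quartile of use,
# mirroring the "direct relationship at every quartile" finding.
print(quartile_means(records))
```

Of course, as the post goes on to note, a trend like this shows correlation, not causation.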
Next was their use of course content. That makes sense; actually, I would have thought it would be the number one predictor of success. The data science group reports: "An interesting result was that after the median, additional access is related to a decline in student grade; students spending more than the average amount of time actually have less likelihood of achieving a higher grade!" That's not so surprising. Students spending more time (slow or distracted readers, or ones who skimmed and need to return to material repeatedly) are probably having problems, rather than being more thorough. The student who spends an hour on a problem that should take 15 minutes is not showing grit.
This is followed by assessments (tests, etc.) and assignments. "If students don’t complete quizzes or submit assignments for a course, they have lower grades than those who do so. This was not a surprising finding. What was surprising to me is that this wasn’t the strongest predictor of a student’s grade." Why is that surprising? Because it is what we use to evaluate and give those grades.

Digging a bit deeper into that data, Blackboard concludes that time is a factor, noting a "...strong decline in grade for students who spend more than the average amount of time taking assessments. This is an intuitive result. Students who have mastered course material can quickly answer questions; those who ponder over questions are more likely to be students who are struggling with the material. The relationship is stronger in assessments than assignments because assessments measure all time spent in the assessment, whereas assignments doesn’t measure the offline time spent creating the material that is submitted. Regardless, this trend of average time spent as the most frequent behavior of successful students is consistent across both tools, and is a markedly different relationship than is found in other tools."
The fifth tool was discussion. I have personally found discussions to be very revealing of a student's engagement in the course. I also find that level of engagement/participation correlated to final grades, but that may be because I include discussions in the final grade. I know lots of instructors who do not require it or don't grade it or give it a small weight in the final grade.
An article on The Chronicle of Higher Education website is a bit unsure of all this big data's value. "But it’s hard to know what to make of the click patterns. Take the finding about grade-checking: Is it an existential victory for grade-grubbers, proving that obsessing over grades leads to high marks? Or does it simply confirm the common-sense notion that the best students are the most savvy at using things like course-management systems?"
And John Whitmer, director of learning analytics and research at Blackboard, says "I’m not saying anything that implies causality."
Should we be looking at data from learning management systems with an eye to increasing student engagement? Of course. But learning science is a new term and a new field, and I don't think we are far enough past the data-collection stage to have clear learning paths or solid course adjustments to recommend.
Measuring clicks on links in an LMS can easily be deceiving, as can measuring the time spent on a page or in the course. If you are brand new to the LMS, you might click twice as much as an experienced user. Spending 10 minutes on a page versus 5 minutes doesn't mean much either, since we don't know whether that time was spent reading, rereading, or going out to get a coffee.
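The ambiguity in time-on-page numbers is easy to see if you try to compute dwell time from a click log yourself. A toy sketch, with invented timestamps and an arbitrary idle cutoff (nothing here reflects how any real LMS computes these figures):

```python
from datetime import datetime, timedelta

# Hypothetical click events for one student: (timestamp, page).
# Dwell time on a page is typically estimated as the gap until the
# next click -- which silently includes any idle time.
events = [
    (datetime(2016, 3, 1, 9, 0), "syllabus"),
    (datetime(2016, 3, 1, 9, 5), "week-3-notes"),
    (datetime(2016, 3, 1, 9, 45), "quiz-2"),  # 40-min gap: reading, or coffee?
]

IDLE_CAP = timedelta(minutes=20)  # arbitrary cutoff for "plausibly active"

def dwell_times(evts, cap=IDLE_CAP):
    """Return (page, naive_minutes, capped_minutes) for each page visited."""
    out = []
    for (t0, page), (t1, _) in zip(evts, evts[1:]):
        gap = t1 - t0
        out.append((page, gap.total_seconds() / 60,
                    min(gap, cap).total_seconds() / 60))
    return out

for page, naive, capped in dwell_times(events):
    print(f"{page}: naive={naive:.0f} min, capped={capped:.0f} min")
```

Whether the 40-minute gap counts as 40 minutes of reading or 20 minutes of "plausibly active" time is a modeling choice the raw click data cannot settle.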
It's a start, and I'm sure more will come from Blackboard, Canvas, MOOC providers (who will have even greater numbers, though in a very different setting) and others.