Jeffrey R. Young moderated a debate panel at the Reimagine Education conference on the question, "Is the Classroom Dead?" Two panelists made the case for in-person gatherings of learners (the traditional classroom), and two argued that the classroom has outlived its usefulness.
Young's own post about it had what might be a more accurate title question: What If We Stopped Calling Them Classrooms?
What do you picture when you think of the word classroom? A teacher in front of a group of students in a room that probably has rows of seats/desks. How does that model match trends in education today?
NJIT once held the trademark on the term "virtual classroom," and that term was often used in the early days of online education to describe what we were trying to do. The instructional design of the time followed the term and tried, as much as possible, to reproduce the classroom online. That meant 90-minute lectures, sometimes recorded live in a physical classroom in front of other students (lecture capture is still being done today). It meant having ways to "raise your hand" and respond to questions or ask questions. It meant tests and quizzes and ways to submit work and a gradebook.
But is that the way we should design online learning? Is it even the way we should be teaching in a physical classroom today?
One thing we seem to have gleaned from MOOCs is that the optimal length of a video lecture is five to seven minutes. Has that finding been adopted in most face-to-face or even online courses? No. Should we be teaching in the classroom in seven-minute chunks?
Not calling a classroom a classroom solves nothing. Calling a school library a media center doesn't mean much if the physical space and its contents remain a library.
Yes, this post is more questions than answers, but perhaps questioning what the classroom is in 2017 is where we are right now.
I was curious to look at this study that analyzes nontraditional students' perceptions of online course quality and what they "value." The authors categorized students into three groups: traditional, moderately nontraditional, and highly nontraditional. Those distinctions are what initially got my attention.
I hear more and more about "nontraditional students," to the degree that I'm beginning to believe that, like any growing minority, they will become the majority and then be what is "traditional." For years, I have been reading about and experiencing the decline of traditional students - those who graduated high school and went immediately on to a four-year college, full time and with the majority living on campus. They are an endangered species - and that scares many schools.
In this study, they say that "There is no precise definition for nontraditional students in higher education, though there are several characteristics that are commonly used to identify individuals labeled as nontraditional. A study by the National Center for Educational Statistics (NCES, 2002), identified nontraditional students as individuals who meet at least one of the following qualifiers: delays enrollment, attends part-time for at least part of the academic year, works full-time, is considered financially independent in relation to financial aid eligibility, has dependents other than a spouse, is a single parent, or does not have a high school diploma. Horn (1996) characterized the “nontraditional-ness” of students on a continuum depending on how many of these criteria individuals meet. In this study, respondents’ age, dependents, employment status and student status are used to define nontraditional students."
Two-year schools as a degree and job path, part-time students working full-time, older students returning to education and other "non-traditional" sources of learning (for-profits, training centers, alternative degrees, MOOCs) have all made many students "non-traditional." Some people have talked about the increasing number of "non-students" who are utilizing online training without any intention of getting credits or a certificate or degree.
The things the non-traditional students in the study value are not surprising: clear instructions on how to get started, clear assessment criteria, and access to technical support if something goes wrong. How different from the traditional students would that be?
The conclusions of the study suggest that "nontraditional students differ from more traditional students in their perceptions of quality in online courses," but they also say that "All students place great importance on having clear statements and guidelines related to how their work will be assessed." The overlap is that students always want to know "what they need to do in order to get an A."
One belief of the authors that I have confirmed over my 16 years of teaching online is that non-traditional students (no matter how we define them) have "multiple responsibilities and they need to ensure that the time spent on their coursework is beneficial and productive." As teachers, we would hope that this is true of all our students, even the very traditional ones who may have fewer non-academic concerns and responsibilities.
As a teacher or instructional designer, this reinforces the idea that students need courses to be well designed, consistently presented, easily navigable, and appropriately aligned, with clearly stated expectations and information about how and whom to contact when they encounter challenges to learning. In that sense, we are all non-traditional.
Money follows eyeballs. I saw that phrase on a slide in a conference presentation about marketing with social media.
Everyone wants your attention. Your children want your attention. Your spouse wants your attention. You want the attention of your students. Nothing new about that concept and there are plenty of ways to get someone's attention.
But it is a more recent way of thinking about attention to consider it as economics. I was listening to the audiobook of A Beautiful Mind recently. It's a book (and a good but highly romanticized film) about the mathematician John Nash. Nash received the Nobel Prize in Economics for his work on game theory as it was applied to economics. His ideas, presented in the 1950s, certainly must have seemed novel at the time, but 40 years later they seemed logical. That will probably be true of attention economics. There are already a good number of people writing about it.
Attention economics is an approach to the management of information that treats human attention as a scarce commodity. With attention as a commodity, you can apply economic theory to solve various information management problems.
Attention is a scarce commodity or resource because a person has only so much of it.
Not only in economics but also in education and other areas, the focused mental engagement that makes us attend to a particular item leads to our decision on whether or not to act. Do we buy the item advertised? Do we do what mommy said to do?
We are deep into the Information Age, and content is so abundant and immediately available that attention has become a limiting factor. There are so many channels and shows on the many versions of "television" competing for our attention that you may just decide not to watch at all. Or you may decide to "cut the cord" and disconnect from many of them to make the choices fewer.
Designers know that if it takes the user too long to locate something, you will lose their attention. On web pages, that attention lasts anywhere from a few seconds to less than a second. If they can't find what they were looking for, they will find it through another source.
The goal then becomes to design methods (filters, demographics, cookies, user testing, etc.) to make the first content a viewer sees relevant. Google and Facebook want you to see ads that are relevant to YOU. That online vendor wants the products on that first page to be things you are most interested in buying. Everything - and everyone - wants to appeal to everyone.
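As a toy illustration of that kind of filtering (the names, tags, and scoring rule here are my own invention, not how any real ad platform works), ranking content so the most relevant item lands in the first position a viewer sees might look like this:

```python
# Toy sketch: put the most relevant content first for a given user.
# All data and the scoring rule are hypothetical illustrations.

def relevance(item_tags, user_interests):
    """Score an item by how many of its tags match the user's interests."""
    return len(set(item_tags) & set(user_interests))

def rank_for_user(items, user_interests):
    """Sort items so the most relevant appear first - the scarce
    'first look' of attention goes to the best-matching content."""
    return sorted(items,
                  key=lambda item: relevance(item["tags"], user_interests),
                  reverse=True)

user_interests = ["education", "technology"]
items = [
    {"title": "Gardening Tips", "tags": ["home", "plants"]},
    {"title": "EdTech Trends", "tags": ["education", "technology"]},
    {"title": "Online Courses", "tags": ["education"]},
]

ranked = rank_for_user(items, user_interests)
print([item["title"] for item in ranked])
# -> ['EdTech Trends', 'Online Courses', 'Gardening Tips']
```

Real systems replace the tag-overlap score with models trained on demographics, cookies, and behavior, but the economics are the same: attention is finite, so the ordering decides what gets any of it at all.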
In attention-based advertising, success is measured by the number of "eyeballs" that see the content.
"You can't please everyone." Really? Why not?
In the history section of the entry on "Attention Economy" on Wikipedia, it lists Herbert A. Simon as possibly being the first person to articulate the concept of attention economics. Simon wrote: "...in an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it" (Simon 1971, pp. 40–41).
Simon was talking about the idea of information overload as an economic concept, and that has led business strategists such as Thomas H. Davenport to use the term "attention economy" (Davenport & Beck 2001).
Where will this lead? On the outer edges are those who speculate that "attention transactions" will replace financial transactions as the focus of our economic system (Goldhaber 1997, Franck 1999).
Designers of websites, software, apps, and any user interface already take attention into account, but information systems researchers have also adopted the idea. Will we see mechanism designs that build on the idea of creating property rights in attention?