Can a Course Not Have Learning Objectives?


Most teachers have stated learning objectives for their courses. Those objectives describe what we plan to teach and how we plan to assess students.

You may have read this summer about a case involving whether a professor can be required to write those objectives down on a syllabus. A professor at the College of Charleston brought a lawsuit against the school, claiming that he was losing his job for refusing to include learning outcomes (the same as objectives?) in his syllabus.

The answer to the question of whether a course can exist without learning objectives is a pretty resounding No. Of course, I'm sure students could point out some truly dreadful courses that did not have clear objectives or outcomes. The real question in that case is whether the objectives are stated explicitly to students, probably in the syllabus. My answer to this second question - whether a course can lack clearly stated objectives - is a resounding Yes.

Faculty need to consciously establish their goals and objectives in designing the course, but they also need to communicate those to students.

I would say that kind of information should be available to a student before she even signs up for the course, perhaps in a course catalog or on an online page about the course. The objectives should also be explained in greater detail in the syllabus and in the course itself.

That is an instructional design task. I was surprised at how difficult it was to get the faculty I worked with on course design to understand the difference between a goal and an objective. We can get bogged down and confused talking about goals, objectives and outcomes. If faculty are confused, certainly the students will be confused as well.

In a bit of an oversimplification, a goal is an overarching principle that guides decision making, while objectives are specific, measurable steps that can be taken to meet the goal. For example, "students will appreciate the scientific method" is a goal; "students will be able to design a controlled experiment to test a given hypothesis" is an objective.

You can further muddy this academic water by adding similar, but not interchangeable, terms such as competencies and outcomes. A document I found online and have used in some version for faculty workshops says: "From an educational standpoint, competencies can be regarded as the logical building blocks upon which assessments of professional development are based. When competencies are identified, a program can effectively determine the learning objectives that should guide the learners’ progress toward their professional goals. Tying these two together will also help identify what needs to be assessed for verification of the program’s quality in its effectiveness towards forming competent learners...In short, objectives say what we want the learners to know and competencies say how we can be certain they know it."

Whatever terminology you use, teachers need to know the larger goals in order to design the ways they will be presented and how they will be measured. Students need to know those last two parts as early as possible.



 


Clicking Links in an Online Course and Student Engagement


[Pie chart: overall LMS tool use, via blackboard.com]



Blackboard's data science people have done a study of all that student clicking in their learning management system, aggregating data from 70,000 courses at 927 colleges and universities in North America during the spring 2016 semester. That's big data.

But the results (reported this week on their blog) are not so surprising. In fact, their own blog post title on the results - "How successful students use LMS tools – confirming our hunches" - implies that we shouldn't be very surprised.

Let's look at the four LMS tools they found most important in predicting student grades. As someone who has taught online for fifteen years, it makes sense to me that those four tools are also the ones most frequently used.

On top was the gradebook - not the actual grades, but the finding that students who frequently check their grades throughout the semester tend to get better marks than those who check less often. "The most successful students are those who access MyGrades most frequently; students doing poorly do not access their grades. Students who never access their grades are more likely to fail than students who access them at least once. There is a direct relationship at every quartile of use – and at the risk of spoiling results for the other tools, this is the only tool for which this direct trend exists. It appears that students in the middle range of grades aren’t impacted by their use of the tool."
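
To make that quartile comparison concrete, here is a rough Python sketch of the kind of bucketing Blackboard describes - synthetic data and made-up column names, not their actual dataset or pipeline:

```python
# Bucket students by how often they opened the gradebook (MyGrades) tool and
# compare average final grades across the buckets. Everything here is invented
# for illustration; it is not Blackboard's data or their actual method.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000
students = pd.DataFrame({
    "mygrades_clicks": rng.poisson(8, n),    # times a student opened MyGrades
    "final_grade": rng.normal(78, 10, n),    # final course grade on a 0-100 scale
})

# rank(method="first") guarantees four equal-sized quartile buckets even when
# many students share the same click count.
students["use_quartile"] = pd.qcut(
    students["mygrades_clicks"].rank(method="first"),
    4,
    labels=["Q1 (least use)", "Q2", "Q3", "Q4 (most use)"],
)

summary = students.groupby("use_quartile", observed=True)["final_grade"].agg(["count", "mean"])
print(summary.round(1))
```

With real LMS logs, the interesting part would be whether the means climb steadily from Q1 to Q4 - the quartile-by-quartile trend the quote describes; the synthetic numbers here will, of course, be flat.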

Next was their use of course content. That makes sense. Actually, I would have thought it would be the number one predictor of success. Their data science group reports, "An interesting result was that after the median, additional access is related to a decline in student grade; students spending more than the average amount of time actually have less likelihood of achieving a higher grade!" That's not so surprising either. Students spending more time (slow or distracted readers, ones who skim and need to repeatedly return to material, etc.) are probably having problems, rather than being more thorough. The student who spends an hour on a problem that should take 15 minutes is not showing grit.

This is followed by assessments (tests, etc.) and assignments. "If students don’t complete quizzes or submit assignments for a course, they have lower grades than those who do so. This was not a surprising finding. What was surprising to me is that this wasn’t the strongest predictor of a student’s grade." Why is that surprising? Because it is what we use to evaluate and give those grades.

Digging a bit deeper in that data, Blackboard concludes that time is a factor, noting a "...strong decline in grade for students who spend more than the average amount of time taking assessments. This is an intuitive result. Students who have mastered course material can quickly answer questions; those who ponder over questions are more likely to be students who are struggling with the material. The relationship is stronger in assessments than assignments because assessments measure all time spent in the assessment, whereas assignments doesn’t measure the offline time spent creating the material that is submitted. Regardless, this trend of average time spent as the most frequent behavior of successful students is consistent across both tools, and is a markedly different relationship than is found in other tools."

The fifth tool was discussion. I have personally found discussions to be very revealing of a student's engagement in the course. I also find that the level of engagement/participation correlates with final grades, but that may be because I include discussions in the final grade. I know lots of instructors who do not require them, don't grade them, or give them only a small weight in the final grade.

An article on The Chronicle of Higher Education website is a bit unsure of all this big data's value. "But it’s hard to know what to make of the click patterns. Take the finding about grade-checking: Is it an existential victory for grade-grubbers, proving that obsessing over grades leads to high marks? Or does it simply confirm the common-sense notion that the best students are the most savvy at using things like course-management systems?"

And John Whitmer, director of learning analytics and research at Blackboard, says "I’m not saying anything that implies causality."

Should we be looking at the data from learning-management systems with an eye to increasing student engagement? Of course. But learning science is a new term and a new field, and I don't think we are far enough past the stage of collecting data to have clear learning paths or solid course adjustments to recommend.

Measuring clicks on links in an LMS can easily be deceiving, as can measuring the time spent on a page or in the course. If you are brand new to the LMS, you might click twice as much as an experienced user. Spending 10 minutes on a page versus 5 minutes doesn't mean much either, since we don't know if the time was spent reading, rereading or going out to get a coffee.
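
To see why time-on-page is so shaky, here is a hypothetical sketch of how it is often inferred from click logs: the gap between consecutive clicks is credited to the earlier page, with long gaps capped at some arbitrary cutoff because the log cannot tell rereading from a coffee run. This is a generic illustration, not how Blackboard actually computes it:

```python
# Infer "time on page" from a student's click log. The gap between consecutive
# clicks is credited to the earlier page, and long gaps are capped because the
# log cannot distinguish rereading from a coffee break.
from datetime import datetime, timedelta

clicks = [  # (timestamp, page) events for one student; data is made up
    (datetime(2016, 3, 1, 10, 0), "syllabus"),
    (datetime(2016, 3, 1, 10, 4), "week1_content"),
    (datetime(2016, 3, 1, 10, 52), "quiz1"),      # 48-minute gap: reading or coffee?
    (datetime(2016, 3, 1, 11, 0), "gradebook"),
]

IDLE_CAP = timedelta(minutes=30)   # arbitrary cutoff; changing it changes the totals a lot

time_on_page = {}
for (t, page), (t_next, _) in zip(clicks, clicks[1:]):
    gap = min(t_next - t, IDLE_CAP)
    time_on_page[page] = time_on_page.get(page, timedelta()) + gap

for page, spent in time_on_page.items():
    print(f"{page}: {spent}")
# Note that the last click ("gradebook") gets no time at all - another reason
# these estimates are shaky.
```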

It's a start, and I'm sure more will come from Blackboard, Canvas, MOOC providers (who will have even greater numbers, though in a very different setting) and others.


Blending Competency-Based Education and Adaptive Learning



Last month, I attended the Fusion 2016 conference in Washington DC, sponsored by the learning management system Brightspace (D2L, or Desire2Learn). Two topics that ran across many of the presentations were competency-based education (CBE) programs and degrees, and adaptive learning. Both are topics I have written about and been interested in over the past two years.

CBE is a movement that Brightspace has put some focus on in its design. Success in CBE is assessed by the knowledge gained, not the time spent, so when a student masters a concept, they move on to the next. Until they master it, they receive specific guidance to help them. The plusses of CBE are usually given as faster completions, individualized pacing, credit for prior knowledge, more immediate feedback and potential cost savings for students.
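
The core progression rule is simple enough to sketch in a few lines of Python: advance only on demonstrated mastery, otherwise loop back with guidance. The threshold, concept list and function names below are all hypothetical, not any particular platform's logic:

```python
# A competency-based progression rule in miniature: a learner advances to the
# next concept only after demonstrating mastery; otherwise they get targeted
# guidance and another attempt. Threshold and concepts are invented.
MASTERY_THRESHOLD = 0.8
concepts = ["fractions", "ratios", "proportions"]

def next_step(concept: str, assessment_score: float) -> str:
    """Decide what the learner does next based on mastery, not seat time."""
    if assessment_score >= MASTERY_THRESHOLD:
        i = concepts.index(concept)
        if i + 1 < len(concepts):
            return f"advance to '{concepts[i + 1]}'"
        return "competency sequence complete"
    return f"review targeted guidance for '{concept}' and reassess"

print(next_step("fractions", 0.92))   # advance to 'ratios'
print(next_step("ratios", 0.61))      # review targeted guidance for 'ratios' and reassess
```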

Adaptive learning is an educational method that uses technology (computers, the LMS, etc.) as interactive teaching devices. More important is using and coordinating the human (faculty, tutors, etc.) and mediated resources to meet the unique needs of each learner. For me, adaptive learning is a 21st century take on what we called individualized learning several decades ago.

I believe that both approaches have value, but after hearing a number of presentations on them last month, I wrote in my notes "Why not blend adaptive learning with CBE?"

Dragan Gasevic has said that what we need to do is create adaptive learners rather than adaptive learning. The idea that software should help develop the attributes we want in learners requires shifting education away from a focus on the acquisition of knowledge. Gasevic and George Siemens consider knowledge acquisition to be "the central underpinning of most adaptive learning software today." They would like to see more focus on the development of learner states of being, including affect, emotion, self-regulation and goal setting.

I am not unique in looking to a blending of CBE and adaptive learning. Unfortunately, for a time, individualized/personalized learning, competency-based education, and blended learning were not well defined by educators and were sometimes even used synonymously. iNACOL and its project, CompetencyWorks, have taken leadership in helping the field understand these concepts as distinct but related, and they have also looked at some of the misconceptions about CBE that might actually undermine equity.

One misconception they note is that flexible pacing gets misused as a synonym for competency education. Allowing self-pacing flexibility and using software for improved data feedback loops are positive steps, but they do not necessarily mean you have a personalized learning environment or a competency-based progression.

On the issue of equity in competency education, one concern is that variation in pacing may mean a percentage of students get left behind. It is not just that gaps already exist for students who lack knowledge and skills; a more time-based structure means those gaps only grow over time. Competency education requires a daily focus on student progress, supports to keep students on pace, and action to ensure they demonstrate mastery.


The Augmented Reality of Pokémon Go

People have been searching for creatures and running down their phone batteries this month since Pokémon Go was released.
Is there any connection of this technology to education, Ken? Let's see.

First off, Pokémon Go is a smartphone game that uses your phone’s GPS and clock to detect where and when you are in the game and make Pokémon creatures appear around you on the screen. The objective is to go and catch them.
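
That "where and when" check is essentially a distance test plus a clock test. Here is a toy Python sketch of the idea - the radius, spawn point and active hours are invented for illustration, not values from the actual game:

```python
# A toy version of the "where and when" check: a creature appears only if the
# player's GPS fix is within some radius of a spawn point and the local time
# falls in the creature's active window.
import math
from datetime import datetime

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def creature_visible(player, spawn, now, radius_m=70, active_hours=range(18, 24)):
    """Show the creature only when the player is close enough at the right hour."""
    close_enough = haversine_m(*player, *spawn) <= radius_m
    right_time = now.hour in active_hours
    return close_enough and right_time

player_fix = (40.7420, -74.1710)    # made-up GPS reading from the phone
spawn_point = (40.7425, -74.1713)   # made-up spawn location
print(creature_visible(player_fix, spawn_point, datetime(2016, 7, 20, 21, 15)))  # True
```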

This combination of a game and the real world interacting is known as augmented reality (AR). AR is often confused with VR - virtual reality. VR creates a totally artificial environment, while augmented reality uses the existing environment and overlays new information on top of it.

The term augmented reality goes back to 1990 and a Boeing researcher, Thomas Caudell, who used it to describe the use of head-mounted displays by electricians assembling complicated wiring harnesses.

A commercial application of AR technology that most people have seen is the yellow "first down" line on televised football games, which, of course, is not on the actual field.

Google Glass and "heads-up" displays in car windshields are other consumer AR applications. There are many more uses of the technology in industries like healthcare, public safety, gas and oil, tourism and marketing.

Back to the game... My son played the card game and handheld video versions 20 years ago, so I had a bit of Pokémon education. I read that it is based on the hobby of bug catching, which is apparently popular in Japan, where the games originated. Like bug catching or birding, the goal is to capture bugs, bird sightings, or virtual Pokémon creatures and add them to your life list. The first generation of Pokémon games began with 151 creatures and has expanded to 700+, but so far only the original 151 are available in the Pokémon Go app.

I have seen a number of news reports about people doing silly, distracted things while playing the game, along with more sinister tales of people being lured somewhere by someone via a planted creature, or riding a bike or driving while playing. (The app has a feature that tries to stop you from using it while moving quickly, as in a car.)

Thinking about educational applications for the game itself doesn't yield anything for me. Although it does require you to explore your real-world environment, the objective is frivolous. So, what we should consider is the use of AR in education beyond the game, while appreciating that the gaming aspect of the app is what drives its appeal and should be used as a motivator for more educational uses.
[Illustration: AR overlay visualization via metaio Engineer]
The easiest use of AR in college classrooms is to make use of the apps already out there in industry. Students in an engineering major should certainly be comfortable with understanding and using AR from their field. In the illustration above, software (metaio Engineer) allows someone to see an overlay visualization of future facilities within the current environment. Another application is having work and maintenance instructions appear directly on a component when it is viewed.
Augmented reality can be a virtual world, even an MMO game. This past year we have heard more about virtual reality and VR headsets and goggles (like Oculus Rift), which are more immersive, but also more awkward to use. This immersiveness is an older concept, and some readers may recall the use of the term "telepresence."

Telepresence referred to a set of technologies that allowed a person to feel as if they were present, to give the appearance of being present, or to have some impact at a place other than their true location. Telerobotics does this, but more commonly it was the move from videotelephony to videoconferencing. Those applications have been around since the end of the last century, and we have come a good way forward from traditional videoconferencing to doing it with hand-held mobile devices, enabling collaboration independent of location.

In education, we experimented with these applications and with the software for MMOs, mirror worlds, augmented reality, lifelogging, and products like Second Life. Pokémon Go is Second Life, but now there is no avatar to represent us. We are in the game, and the game is the world around us, augmented as needed. The world of the game is the world.