Can You Still Require a Textbook for Your Course?

College students' spending on textbooks continues to be an issue, as it has been for decades. Lately, it seems that spending on books and course materials is declining, but not because textbooks are cheaper.

As I have posted and presented in years past, more and more students simply skip buying required course materials. Do students have alternatives? There are usually used books to buy - though the constant "new editions" discourage that, and buying from college bookstores or publishers is often not much of a savings.

The scarier alternative for faculty, colleges, and students is that students simply try to get through the course without the materials. They fake it, beg, borrow, or steal it. Some students report that they have to take fewer classes (especially true of part-time students). Some students say that the cost of materials is a factor in choosing classes (electives) or sections of a course (required).

Can you still require a textbook for a course? Of course. Can you expect that all the students will own a copy? No.

A new survey of undergraduates on 23 campuses by the National Association of College Stores found that students spent an average of $563 on course materials during the 2014-15 academic year, compared with $638 the year before.

Some of that decrease may be due to the increasing use of textbook-rental programs and of open textbooks. But of those students who did not buy textbooks, the report noted, a greater percentage than in the past said it was because "they believed them to be unnecessary."

I would recommend that students not buy a textbook before the semester begins, but instead wait to see how critical the readings are to course success.

I have found with my own students (and other surveys seem to agree) that, as digital as they might be, they still prefer print if cost isn't a factor.

I do not see a significant increase in the use of free and inexpensive open textbooks, and that is unfortunate. That is out of the hands of students and is often a direct result of teachers not being aware of them.

In a time when college bookstores are more likely to be called just the "campus store" because more sales come from clothing, snacks and drinks than books, you would expect the open textbook movement to be gaining strength.

I did a number of presentations to faculty in past years about finding open textbooks to use in their courses. (An older guide I did at PCCC is still online and relevant.) You can start by just searching through some of the most used sources (below), but, yes, it does take some work.

I haven't assigned a textbook in any of my graduate classes in the past five years; all readings come from free online sources, including open textbooks.

Big-Data Scientists Face Ethical Challenges After Facebook Study

"Big-Data Scientists Face Ethical Challenges After Facebook Study" By Paul Voosen from http://chronicle.com/article/Big-Data-Scientists-Face/150871/

"Jeffrey Hancock, a Cornell U. professor who teamed up with Facebook on a controversial study of emotion online, says the experience has led him to think about how to continue such collaborations “in ways that users feel protected, that academics feel protected, and industry feels protected.”
Last summer the technologists discovered how unaware everyone else was of this new world.

After Facebook, in collaboration with two academics, published a study showing how positive or negative language spreads among its users, a viral storm erupted. Facebook "controls emotions," headlines yelled. Jeffrey T. Hancock, a Cornell University professor of communications and information science who collaborated with Facebook, drew harsh scrutiny. The study was the most shared scientific article of the year on social media. Some critics called for a government investigation.

Much of the heat was fed by hype, mistakes, and underreporting. But the experiment also revealed problems for computational social science that remain unresolved. Several months after the study’s publication, Mr. Hancock broke a media silence and told The New York Times that he would like to help the scientific world address those problems"


That Facebook Research and Academia


I have waited a few weeks for the Internet to react to the Facebook research that was revealed and caused a big buzz (again) about privacy. The short summary: Facebook manipulated the news feeds of thousands of its users, without their knowing consent, in order to do some research. They wanted to know if they could have an effect on people’s behavior in the network.


Oh wait - that was back in 2010 when they were looking at U.S. voting patterns in the midterm elections. That story was told in 2012 by Nature magazine. Not much of a public reaction. No real outcry about questionable ethics.

But this latest study that Facebook conducted was co-designed by researchers at Cornell University. This research examined how positive or negative language spreads in social networks. If you see more negative comments and news, do you become more negative yourself in your posts?

This time there were two negative reactions by the public and the press. First, in this year following the NSA and Snowden revelations, there was a very vocal outcry about whether Internet users should be informed about experiments that test human behavior. (Facebook likes to point out that users did "allow" the study by agreeing to the terms of service.)

The second concern was that a university played a role in the research design.

What were the results of the research? Users who saw fewer positive posts were less likely to post something positive, and vice versa, but the effect was small and faded as days passed. That sounds like common sense, right? Actually, existing research had seemed to indicate the opposite: that when users saw a number of happy, positive news feed items from friends, they felt more negative about their own lives.

Researchers in academia are used to having research approved first by an Institutional Review Board (IRB). Did that happen at Cornell? The data scientist at Facebook conducted the actual research. He collaborated with a Cornell researcher and his former postdoc on the design and subsequent analysis. But since the Cornell researchers did not participate in the data collection, the university's IRB concluded that the study did not require the oversight that human-subjects research usually does.

The research study was published in early June in the respected journal Proceedings of the National Academy of Sciences.

The revelations about the NSA snooping had a split reaction. Some people saw Snowden as a hero whistle-blower alerting us to wrongdoing and wanted changes to be made in what was allowed. Others saw him as dangerous because he revealed a kind of research that the government needs to do to protect us.

The Facebook/Cornell research certainly doesn't come anywhere near the complexity or seriousness of the NSA case. Nevertheless, some people want to see this kind of research controlled or stopped and our online privacy protected better. A smaller number think that this is part of the price of using the Net and social media.

My conclusion? This kind of social research will continue. BUT - it will be done with your approval (even if you don't read the fine print before clicking that AGREE button), and it is unlikely to be made public. It will be kept private and will not be published. And colleges will be much more careful about entering research collaborations with corporations - especially those that operate online.





33 Ethicists Defend Facebook’s Controversial Mood Study

A group of bioethicists wrote in a column that Facebook's controversial study of mood manipulation was not unethical, and that harsh criticism of it risks putting a chill on future research. The column was written by six ethicists, joined by 27 others.


Are We Less Adrift Academically?



It was three years ago that I posted about Richard Arum's study of student learning in higher education, which followed students over a two-year period to examine how institutional settings, student backgrounds, and individual academic programs influence how much they learn on campus. He was measuring "higher order thinking skills" such as problem-solving, critical thinking, analytical reasoning, and communication skills.


When he published Academically Adrift: Limited Learning on College Campuses in 2011, it caused a lot of discussion about higher education and the internal workings of colleges.

One of his overall findings was that many students showed no meaningful gains on key measures of learning during their college years.

Inside Higher Ed posted about two more recent reports that challenge Academically Adrift's underlying conclusions about students' critical thinking gains in college, and especially the extent to which others have seized on those findings to suggest that too little learning takes place in college. The studies by the Council for Aid to Education show that students taking the Collegiate Learning Assessment made an average gain of 0.73 of a standard deviation in their critical thinking scores, significantly more than that found by the authors of Academically Adrift.
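If the assessment jargon is unfamiliar: a gain reported "in standard deviations" is an effect size, which expresses how far average scores moved relative to how spread out the scores are. As a minimal sketch (my own gloss, not a formula quoted from the CAE reports, and the subscript labels are mine):

\[
d = \frac{\bar{X}_{\text{later}} - \bar{X}_{\text{earlier}}}{SD}
\]

By that yardstick, the CAE's 0.73 means students averaged roughly three-quarters of a standard deviation higher on the later CLA administration than on the earlier one; the smaller figures that come up below (0.18, 0.47) imply much more modest gains.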


So, college does matter?

Richard Arum (NYU) made sure to note methodological differences in how the two sets of data were drawn. (For example, the newer study does not follow the same group of students over time.) He also says that his study (done with Josipa Roksa of UVA) never questioned the contribution that college makes to student learning, although that has been the spin given to their research by the book's champions.


Three years ago, Academically Adrift was noteworthy because it used some new assessment tools that specifically measure the "added value" that colleges impart to their students' learning, by allowing for the comparison of the performance of students over time.


The study was criticized for relying so heavily on the Collegiate Learning Assessment as its way to suggest whether or not students have learned. The reports are full of assessment talk: an average gain of 0.73 of a standard deviation over several test administrations; or maybe it is 0.18, less than 20 percent of a standard deviation, when tracking students over two years rather than four; or a gain of 0.47 of a standard deviation, still significantly smaller than, but closer to, the CAE finding; or maybe, because Arum and Roksa followed the same cohort of students throughout their collegiate careers, their longitudinal comparison is more significant than a cross-sectional one, since at most institutions significant numbers of the entering freshmen will have dropped out, and hundreds of research studies over the years have clearly demonstrated that dropouts are not comparable to degree completers.

My head is spinning.

But are we less adrift than we were three years ago? I see no major movement or changes that would indicate that things are any different. But then, we can't even agree on what they were three years ago.