Can Bloom's Taxonomy Teach Us Anything About AI?

Image: the spiral model of Bloom's Taxonomy (gettingsmart.com)


When I was studying to be a secondary school teacher, Bloom’s Taxonomy often came up in my classes as a tool for lesson planning and for assessing learners. Over the years, there have been several revisions to its familiar pyramid. An article on www.gettingsmart.com suggests a spiral might be better, particularly if you want to use it as a lens for viewing AI.

The author, Vriti Saraf, opines that the most important potential of AI isn’t to enhance human productivity but to enhance and support human thinking, and that looking at AI’s capabilities through the lens of Bloom’s Taxonomy showcases the possible interplay of humans and machines.

It is an interesting idea. Take a look.


Telling Students to Use AI


2023 was certainly a year for AI. In education, some teachers avoided it and some embraced it, perhaps reluctantly at first. Much of the reaction was to AI that can write essays. Some schools, teachers, districts, colleges, and departments tried to ban it. Of course, that is impossible, just as it was impossible to ban the use of Wikipedia or, going back to the previous century, a word processor, a calculator in math class, or the Internet as a place to copy and paste information.

What happened when an entire class of college students was told to use ChatGPT to write their essays?

Chris Howell, an adjunct assistant professor of religious studies at Elon University, noticed more and more suspiciously chatbot-esque prose popping up in student papers. So rather than trying to police the tech, he embraced it. He assigned students to generate an essay entirely with ChatGPT and then critique it themselves.

When I first caught students attempting to use ChatGPT to write their essays, it felt like an inevitability. My initial reaction was frustration and irritation—not to mention gloom and doom about the slow collapse of higher education—and I suspect most educators feel the same way. But as I thought about how to respond, I realized there could be a teaching opportunity. Many of these essays used sources incorrectly, either quoting from books that did not exist or misrepresenting those that did. When students were starting to use ChatGPT, they seemed to have no idea that it could be wrong.

I decided to have each student in my religious studies class at Elon University use ChatGPT to generate an essay based on a prompt I gave them and then “grade” it. I had anticipated that many of the essays would have errors, but I did not expect that all of them would. Many students expressed shock and dismay upon learning the AI could fabricate bogus information, including page numbers for nonexistent books and articles. Some were confused, simultaneously awed and disappointed. Others expressed concern about the way overreliance on such technology could induce laziness or spur disinformation and fake news. Closer to the bone were fears that this technology could take people’s jobs. Students were alarmed that major tech companies had pushed out AI technology without ensuring that the general population understands its drawbacks.

The assignment satisfied my goal, which was to teach them that ChatGPT is neither a functional search engine nor an infallible writing tool.

Source: wired.com/story/dont-want-students-to-rely-on-chatgpt-have-them-use-it/

Detecting AI-Written Content

When ChatGPT hit academia hard at the start of this year, there was much fear from teachers at all grade levels. I saw articles and posts saying it would be the end of writing. A Princeton University student built an app that helps detect whether a text was written by a human being or with an artificial intelligence tool like ChatGPT. Edward Tian, then a senior computer science major, has said that the algorithm behind his app, GPTZero, can "quickly and efficiently detect whether an essay is ChatGPT or human written."

GPTZero is at gptzero.me. Now that it has been released as a free and paid product, I was able to attend an online demo of the app and communicate with Tian.

Because ChatGPT has exploded in popularity, it has drawn interest from investors. The Wall Street Journal reported that its parent company, OpenAI, could attract investments valuing it at $29 billion. But ChatGPT has also raised fears that students are using the tool to cheat on writing assignments.

GPTZero examines two variables in any piece of writing. It looks at a text's "perplexity," which measures its randomness: human-written texts tend to be more unpredictable than bot-produced work. It also examines "burstiness," which measures variance, or inconsistency, within a text, because there is a lot of variance in human-generated writing.
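
GPTZero's actual implementation isn't public, but a minimal sketch of the two measurements might look like the following, using GPT-2 via the Hugging Face transformers library as the scoring model. The function names and the crude sentence-splitting here are my own illustration, not GPTZero's code:

```python
# A minimal sketch, not GPTZero's actual code: score "perplexity" and
# "burstiness" of a text with GPT-2 via Hugging Face transformers.
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How 'surprised' the model is by the text; lower means more predictable."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return math.exp(loss.item())

def burstiness(text: str) -> float:
    """Variance of per-sentence perplexity; human writing tends to vary more."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return sum((x - mean) ** 2 for x in scores) / len(scores)

sample = "The cat sat on the mat. It dreamed of electric mice. Rent was due."
print(perplexity(sample), burstiness(sample))
```

On this intuition, a text with low perplexity and low burstiness leans machine-written, while high, uneven scores lean human.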

Unlike other tools, such as Turnitin.com, the app does not tell you the source of the writing. That is because of the odd situation that writing produced by a chatbot isn't exactly from any particular source.

There are other tools to detect AI writing - see https://www.pcmag.com/how-to/how-to-detect-chatgpt-written-text

Large language models themselves can be trained to spot AI-generated writing. Given two sets of text, one AI-generated and the other written by people, a model can theoretically learn to recognize and detect AI writing.
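
As a minimal sketch of that two-corpus idea, here is a toy classifier built with scikit-learn standing in for an actual LLM fine-tune; the sample texts are placeholders for illustration only:

```python
# A toy sketch of the two-corpus idea: label one set of texts human (0)
# and one AI (1), then train a classifier to tell them apart.
# scikit-learn stands in here for fine-tuning an actual LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpora; a real detector would need many thousands of examples.
human_texts = [
    "honestly the ending of that book wrecked me, could not sleep at all",
    "we drove past the old mill and dad told the same story yet again",
]
ai_texts = [
    "In conclusion, there are several important factors to consider.",
    "Artificial intelligence offers numerous benefits across many industries.",
]

texts = human_texts + ai_texts
labels = [0] * len(human_texts) + [1] * len(ai_texts)  # 1 = AI-written

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Probability that a new passage is AI-written, per this toy model.
print(detector.predict_proba(["Overall, this topic has many significant aspects."])[:, 1])
```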

Report: AI and the Future of Teaching and Learning

I see articles and posts about artificial intelligence every day. I have written here about it a lot in the past year. You cannot escape the topic of AI even if you are not involved in education, technology or computer science. It is simply part of the culture and the media today. I see articles about how AI is being used to translate ancient texts at a speed and accuracy that is simply not possible with humans. I also see articles about companies now creating AI software for warfare. The former is a definite plus, but the latter is a good example of why there is so much fear about AI - justifiably so, I believe.

Many educators seem to have had the same fearful initial reaction to the generative chatbots that became accessible to the public late last year and were quickly being used by students to write essays and research papers. This spread through K-12 and into colleges, and even into academic papers being written by faculty.

A chatbot powered by reams of data from the internet has passed exams at a U.S. law school after writing essays on topics ranging from constitutional law to taxation and torts. Jonathan Choi, a professor at the University of Minnesota Law School, gave ChatGPT the same test faced by students, consisting of 95 multiple-choice questions and 12 essay questions. In a white paper titled "ChatGPT Goes to Law School," he and his coauthors reported that the bot scored a C+ overall.

ChatGPT, from the U.S. company OpenAI, got most of the initial attention in the early part of 2023, and OpenAI received a massive injection of cash from Microsoft. In the second half of the year, we have seen many other AI chatbot players, including Microsoft and Google, who incorporated chatbots into their search engines. OpenAI predicted in 2022 that AI would lead to the "greatest tech transformation ever." I don't know if that will prove to be true, but it certainly isn't unreasonable from the view of 2023.

Chatbots use artificial intelligence to generate streams of text from simple or more elaborate prompts. They don't "copy" text from the Internet (so "plagiarism" is hard to claim) but generate new text based on the data they were trained on. The results have been so good that educators have warned the technology could lead to widespread cheating and even signal the end of traditional classroom teaching methods.
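
To make "generate rather than copy" concrete, here is a minimal sketch of prompt-driven generation, with small, openly available GPT-2 standing in for a modern chatbot model:

```python
# A minimal sketch of prompt-driven generation; GPT-2 stands in for a
# modern chatbot model. The model samples new tokens rather than
# copying a stored source, which is why "plagiarism" is hard to claim.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Calculators changed the math classroom because",
    max_new_tokens=40,
    do_sample=True,    # sampling: each run can produce different text
    temperature=0.8,
)
print(result[0]["generated_text"])
```

Run it twice and you will likely get two different continuations, which is exactly why detection tools fall back on statistical signals rather than source matching.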

Lately, I see more sober articles about the use of AI: articles about teachers including lessons on the ethical use of AI by students, and about how they are using chatbots to help create their teaching materials. I know teachers across K-20 who attended faculty workshops this past summer to figure out what to do in the fall.

The U.S. Department of Education recently issued a report on its perspective on AI in education. It includes a warning of sorts: Don’t let your imagination run wild. “We especially call upon leaders to avoid romancing the magic of AI or only focusing on promising applications or outcomes, but instead to interrogate with a critical eye how AI-enabled systems and tools function in the educational environment,” the report says.

Some of the ideas are unsurprising. For example, the report stresses that humans should be placed “firmly at the center” of AI-enabled edtech, which echoes an earlier White House “blueprint for AI” that said the same thing. And an approach to pedagogy that has been suggested for several decades - personalized learning - might be well served by AI. Artificial assistants might be able to automate tasks, giving teachers more time for interacting with students, and AI can give instant feedback to students, "tutor-style."

The report's optimism appears in the idea that AI can support teachers rather than diminish their roles. Still, where AI will be in education in the next year, or the next decade, is unknown.