So You Want To Be An AI Prompt Engineer

When I was teaching in a high school, I used to tell students (and faculty) that we were not preparing them for jobs. I was sure many of our students would end up in jobs with titles that did not exist then. There is a song by The Byrds from the 1960s titled "So You Wanna Be a Rock 'n' Roll Star." In 2024, it could be "So You Want To Be An AI Prompt Engineer."

The role of AI prompt engineer attracted attention for its high-six-figure salaries when it emerged in early 2023. What does this job entail? The principal aim is to help a company integrate AI into its operations. Some people describe the job as more prompter than engineer.

There are already tools that work with apps like OpenAI’s ChatGPT that can automate the writing process using sets of built-in prompts. Does that mean AI will already be replacing AI prompt engineers? For now, the prompter works to ensure that users get the desired results. They might also instruct other employees on how to use generative AI tools, becoming the in-house AI support team. AI can automate "trivial" tasks and free up time for work that requires creative thinking.
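To make the idea concrete, here is a toy sketch of what such a tool does under the hood: it wraps a reusable, built-in prompt template around the user's input and sends it to a chat model. This assumes the openai Python package and an API key in the environment; the template text, model choice, and function name are hypothetical illustrations, not any particular product's code.

```python
# Toy sketch of a "built-in prompt" writing tool: a reusable template
# wrapped around the user's draft and sent to a chat model.
# Assumes the openai package (>=1.0) and OPENAI_API_KEY set in the environment.
# BUILT_IN_PROMPT and polish_email are hypothetical names for illustration.
from openai import OpenAI

client = OpenAI()

BUILT_IN_PROMPT = (
    "You are a business writing assistant. Rewrite the draft below as a "
    "clear, polite email of no more than 150 words:\n\n{draft}"
)

def polish_email(draft: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do
        messages=[{"role": "user", "content": BUILT_IN_PROMPT.format(draft=draft)}],
    )
    return response.choices[0].message.content

print(polish_email("hey need those Q3 numbers asap thx"))
```

In this framing, much of the prompt engineer's work is writing and refining templates like BUILT_IN_PROMPT so that other employees never have to.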

What kind of training leads to this job? You might think a background in computer science, but strong language and writing ability is probably more important. People who write in the corporate world might justifiably fear that AI will take their jobs away. Being a prompter might be an alternative.

Still, I suspect a prompt engineer's job could itself become vulnerable as software gets better at understanding users’ prompts.

If you are interested in being an AI prompt engineer, I posted last week about some free online courses offered by universities and tech companies, three of which relate to creating prompts for AI.

AI Applications and Prompt Engineering is an edX introductory course on prompt engineering that starts with the basics and ends with creating your own applications.

Prompt Engineering for ChatGPT is a specific 6-module course from Vanderbilt University (through Coursera) that offers beginners a starting point for writing better prompts.

Another course, ChatGPT Prompt Engineering for Developers, is offered by OpenAI in collaboration with DeepLearning.AI and is taught by Isa Fulford and Andrew Ng. It covers best practices and includes hands-on practice.

Can Bloom's Taxonomy Teach Us Anything About AI?

Image: spiral model (gettingsmart.com)

When I was studying to be a secondary school teacher, Bloom’s Taxonomy often came up in my classes as a way to plan lessons and assess learners. Recently, there have been several revisions to its familiar pyramid. An article on www.gettingsmart.com suggests a spiral might be better, particularly if you want to use it as a lens to view AI.

The author, Vriti Saraf, opines that the most important potential of AI isn’t to enhance human productivity but to enhance and support human thinking, and that looking at AI’s capabilities through the lens of Bloom’s Taxonomy showcases the possible interplay of humans and machines.

It is an interesting idea. Take a look.

Telling Students to Use AI

2023 was certainly a year for AI. In education, some teachers avoided it and some embraced it, perhaps reluctantly at first. Much of the reaction has been to AI that can write essays: some schools, teachers, districts, colleges, and departments have tried to ban it. Of course, that is impossible, just as it was impossible to ban the use of Wikipedia or, going back to the previous century, the use of a word processor, a calculator in a math class, or the Internet to copy and paste information.

What happened when an entire class of college students was told to use ChatGPT to write their essays?

Chris Howell, an adjunct assistant professor of religious studies at Elon University, noticed more and more suspiciously chatbot-esque prose popping up in student papers. So rather than trying to police the tech, he embraced it. He assigned students to generate an essay entirely with ChatGPT and then critique it themselves. Howell described the experience in Wired:

When I first caught students attempting to use ChatGPT to write their essays, it felt like an inevitability. My initial reaction was frustration and irritation—not to mention gloom and doom about the slow collapse of higher education—and I suspect most educators feel the same way. But as I thought about how to respond, I realized there could be a teaching opportunity. Many of these essays used sources incorrectly, either quoting from books that did not exist or misrepresenting those that did. When students were starting to use ChatGPT, they seemed to have no idea that it could be wrong.

I decided to have each student in my religion studies class at Elon University use ChatGPT to generate an essay based on a prompt I gave them and then “grade” it. I had anticipated that many of the essays would have errors, but I did not expect that all of them would. Many students expressed shock and dismay upon learning the AI could fabricate bogus information, including page numbers for nonexistent books and articles. Some were confused, simultaneously awed and disappointed. Others expressed concern about the way overreliance on such technology could induce laziness or spur disinformation and fake news. Closer to the bone were fears that this technology could take people’s jobs. Students were alarmed that major tech companies had pushed out AI technology without ensuring that the general population understands its drawbacks.

The assignment satisfied my goal, which was to teach them that ChatGPT is neither a functional search engine nor an infallible writing tool.

Source: wired.com/story/dont-want-students-to-rely-on-chatgpt-have-them-use-it/

Detecting AI-Written Content

When ChatGPT hit academia hard at the start of this year, there was much fear from teachers at all grade levels. I saw articles and posts saying it would be the end of writing. A Princeton University student built an app that helps detect whether a text was written by a human being or with an artificial intelligence tool like ChatGPT. Edward Tian, then a senior computer science major, has said that the algorithm behind his app, GPTZero, can "quickly and efficiently detect whether an essay is ChatGPT or human written."

GPTZero is at gptzero.me. I was able to attend an online demo of the app now that it has been released as a free and paid product, and also communicated with Tian.

Because ChatGPT has exploded in popularity, it has drawn interest from investors. The Wall Street Journal reported that its maker, OpenAI, could attract investments valuing it at $29 billion. But the app has also raised fears that students are using the tool to cheat on writing assignments.

GPTZero measures two variables in any piece of writing it analyzes. It looks at a text's "perplexity," which measures its randomness: human-written texts tend to be more unpredictable than bot-produced work. It also examines "burstiness," which measures variance, or inconsistency, from sentence to sentence, because there is a lot of that variance in human-generated writing.
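GPTZero's actual implementation is not public, but the two measurements can be roughly illustrated. The sketch below assumes the Hugging Face transformers library with GPT-2 as a stand-in scoring model: it computes perplexity for a whole text and a simple burstiness score as the variance of per-sentence perplexities. It illustrates the idea, not GPTZero's code.

```python
# Rough illustration of "perplexity" and "burstiness" scoring.
# Assumptions: Hugging Face transformers + GPT-2 as the scoring model;
# this is not GPTZero's actual code, just the general idea.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How 'surprised' the model is by the text; lower = more predictable."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return math.exp(out.loss.item())

def burstiness(text: str) -> float:
    """Variance of per-sentence perplexity; human writing tends to vary more."""
    sentences = [s.strip() for s in text.split(".") if len(s.strip()) > 10]
    scores = [perplexity(s) for s in sentences]
    if len(scores) < 2:
        return 0.0
    mean = sum(scores) / len(scores)
    return sum((x - mean) ** 2 for x in scores) / (len(scores) - 1)

sample = "Paste the essay you want to check here. It should contain several sentences."
print("perplexity:", perplexity(sample))
print("burstiness:", burstiness(sample))
```

In this framing, a text that scores low on both measures looks more machine-generated; where the thresholds sit is where real detectors differ.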

Unlike other tools, such as Turnitin.com, the app does not tell you the source of the writing. That is because of the odd fact that writing produced by a chatbot doesn't come from any one particular source.

There are other tools to detect AI writing - see https://www.pcmag.com/how-to/how-to-detect-chatgpt-written-text

Large language models themselves can be trained to spot AI-generated writing. Train a model on two sets of text, one generated by AI and the other written by people, and in theory it can learn to recognize and flag AI writing.
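As a minimal sketch of that setup, assuming scikit-learn and two small labeled collections of text (human_texts and ai_texts below are placeholders you would fill with real examples), you can fit a simple classifier on the two classes. Real detectors fine-tune large language models rather than using TF-IDF features, but the two-labeled-sets training idea is the same.

```python
# Minimal sketch of training a detector on two labeled sets of text.
# Assumptions: scikit-learn is installed; human_texts / ai_texts are tiny
# placeholders here and would be large collections in practice.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_texts = ["a passage written by a person", "another human-written passage"]
ai_texts = ["a passage generated by a chatbot", "another machine-generated passage"]

texts = human_texts + ai_texts
labels = [0] * len(human_texts) + [1] * len(ai_texts)  # 0 = human, 1 = AI

# TF-IDF features plus a linear classifier stand in for a fine-tuned model.
detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

print(detector.predict(["an essay whose origin you want to check"]))
```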