Tay: A Cautionary Tale of AI

chatbot and postsTay was a chatbot originally released by Microsoft Corporation as a Twitter bot on March 23, 2016. It "has had a great influence on how Microsoft is approaching AI," according to Satya Nadella, the CEO of Microsoft.

Tay caused almost immediate controversy when the bot began to post inflammatory and offensive tweets through its Twitter account, causing Microsoft to shut down the service only 16 hours after its launch. According to Microsoft, this was caused by trolls who "attacked" the service as the bot made replies based on its interactions with people on Twitter - a dangerous proposition.

It was named "Tay" as an acronym for "thinking about you." It was said that it was similar to or based on Xiaoice, a similar Microsoft project in China that Ars Technica reported that it had "more than 40 million conversations apparently without major incident".

Interestingly, Tay was designed to mimic the language patterns of a 19-year-old American girl and presented as "The AI with zero chill."

It was quickly abused with Twitter users began tweeting politically incorrect phrases, teaching it inflammatory messages so that the bot began releasing racist and sexually-charged messages in response to other Twitter users.

One artificial intelligence researcher, Roman Yampolskiy, commented that Tay's misbehavior was understandable because it mimicked the deliberately offensive behavior of other Twitter users, and Microsoft had not given the bot an understanding of inappropriate behavior. He compared the issue to IBM's Watson, which began to use profanity after reading entries from the website Urban Dictionary.

It was popular in its short life. Within 16 hours of its release, Tay had tweeted more than 96,000 times, That is when Microsoft suspended the account for "adjustments." Microsoft confirmed that Tay had been taken offline, released an apology on its official blog, and said it would "look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values."

Then on March 30, 2016, Microsoft accidentally re-released the bot on Twitter while testing it. Given its freedom, Tay released some drug-related tweets, then it became stuck in a repetitive loop of tweeting "You are too fast, please take a rest", several times a second. The posts appeared in the feeds of 200,000+ Twitter followers.

Tay has become a cautionary tale on the responsibilities of creators for their AI.

In December 2016, Microsoft released Tay's successor, a chatbot named Zo which was an English version of Microsoft's other successful chatbots Xiaoice (China) and Rinna [ja] (Japan).

So You Want To Be An AI Prompt Engineer

AI prompt engineerWhen I was teaching in a high school, I used to tell students (and faculty) that we were not preparing them for jobs. I was sure many of our students would end up in jobs with titles that did not exist then. There is a song by The Byrds from the 1960s titled "So You Wanna Be a Rock 'n' Roll Star." In 2024, it could be "So You Want To Be An AI Prompt Engineer."

The role of AI prompt engineer attracted attention for its high-six-figure salaries when it emerged in early 2023. What does this job entail? The principal aim is to help a company integrate AI into its operations. Some people describe the job as more prompter than engineer.

There are already tools that work with apps like OpenAI’s ChatGPT platform that can automate the writing process using sets of built-in prompts. Does that mean that AI will replace AI prompt engineers already? For now, the prompter works to ensure that users get the desired results. They might also be the instructors for other employees on how to use generative AI tools. They become the AI support team. AI can automate "trivial" tasks and make more time for work that requires creative thinking.

What kind of training leads to getting this job? You might think a background in computer science, but probably a strong language and writing ability is more important. People who write in the corporate world might justifiably fear AI will take their jobs away. Being a prompter might be an alternative.

Still, I suspect that there is a good possibility that a prompter/engineer's job might be vulnerable as software becomes better at understanding users’ prompts.

If you are interested in being an AI prompt engineer, I posted last week about some free online courses offered by universities and tech companies that included three courses that relate to creating prompts for AI.

AI Applications and Prompt Engineering is an edX introductory course on prompt engineering that starts with the basics and ends with creating your applications.

Prompt Engineering for ChatGPT is a specific 6-module course from Vanderbilt University (through Coursera) that offers beginners a starting point for writing better prompts.

Another course on ChatGPT Prompt Engineering for Developers is offered by OpenAI in collab with DeepLearning and it is taught by Isa Fulford and Andrew Ng.  It covers best practices and includes hands-on practice. 

Learning AI - Free College-Level Courses

online student

If you are interested in taking some free AI courses offered by Google, Harvard, and others, here are 8 you might consider on a variety of approaches. For Coursera courses without the trial, go to the course you want to take and click 'Enroll for free', then 'Audit the course'. You'll need to create an account to take courses, but won't need to pay anything.

Google offers 5 different courses to learn generative AI from the ground up. Start with an Introduction to AI and finish having an understanding of AI as a whole.  https://lnkd.in/eW5k4DVz

Microsoft offers an AI course that covers the basics and more. Start with an introduction and continue learning about neural networks and deep learning.  https://lnkd.in/eKJ9qmEQ

Introduction to AI with Python from Harvard University (edX) is a full 7-week course to explore the concepts and algorithms of AI. It starts with the technologies behind AI and ends with knowledge of AI principles and machine learning libraries.  https://lnkd.in/g4Sbb3nQ

LLMOps are Large Language Model Ops offered by Google Cloud in collaboration with DeepLearning. Taught by Erwin Huizenga, it goes through the LLMOps pipeline of pre-processing training data and adapt a supervised tuning pipeline to train and deploy a custom LLM.

Big Data, Artificial Intelligence, and Ethics is a 4-module course offered by Coursera from the University of California - Davis that covers big data and introduces IBM's Watson as well as learning about big data opportunities and knowing the limitations of AI. I think the inclusion of ethics is an important element.

AI Applications and Prompt Engineering is an edX introductory course on prompt engineering that starts with the basics and ends with creating your applications.

Prompt Engineering for ChatGPT is a specific 6-module course from Vanderbilt University (through Coursera) that offers beginners a starting point for writing better prompts.

Another course on ChatGPT Prompt Engineering for Developers is offered by OpenAI in collab with DeepLearning and it is taught by Isa Fulford and Andrew Ng.  It covers best practices and includes hands-on practice. 

Terms of Service

those confusing terms of serviceTerms of service. That information you tend to avoid reading. Good example: Google's newly updated terms of service, which I found out about in an email last week. I decided to read them.

Their updated terms opens with "We know it’s tempting to skip these Terms of Service, but it’s important to establish what you can expect from us as you use Google services, and what we expect from you. These Terms of Service reflect the way Google’s business works, the laws that apply to our company, and certain things we’ve always believed to be true. As a result, these Terms of Service help define Google’s relationship with you as you interact with our services."

Here are a few items I noted:
Some things considered to be abuse on the part of users includes accessing or using Google services or content in fraudulent or deceptive ways, such as:
phishing
creating fake accounts or content, including fake reviews
misleading others into thinking that generative AI content was created by a human
providing services that appear to originate from you (or someone else) when they actually originate from us
providing services that appear to originate from us when they do not
using our services (including the content they provide) to violate anyone’s legal rights, such as intellectual property or privacy rights
reverse engineering our services or underlying technology, such as our machine learning models, to extract trade secrets or other proprietary information, except as allowed by applicable law
using automated means to access content from any of our services in violation of the machine-readable instructions on our web pages (for example, robots.txt files that disallow crawling, training, or other activities)
hiding or misrepresenting who you are in order to violate these terms
providing services that encourage others to violate these terms

Take that second item I highlighted about misleading others into thinking that generative AI content was created by a human, Does that mean that if I use their generative AI or some other provider's AI to help write a blog post that I put here with my name that I am violating their terms of service?

Though I would say that Google's Terms of Service is written in plain langauage that most readers should be able to understand, the implications of some of the terms are much harder to interpret.

NOTE: The Google Terms of Service (United States version) that I reference are effective May 22, 2024.
View
Archived versions and  Download a PDF