AI and Bias

Bias has always existed, and it has always existed online. Now AI adds another layer of bias.

Bias generated by technology is “more than a glitch,” says data journalist Meredith Broussard, whose book takes that phrase as its title.

For example, why does AI have a bias against dark skin? Largely because its training data is scraped from the Internet, and the Internet is full of biased content.

This doesn't give AI a pass on bias. It is more of a comment or reflection on bias in general.

Harmful Content Online

girl on phone

Photo by Andrea Piacquadio

It is an important issue to cover, but unfortunately I am not surprised to see a report covered under the BBC headline "More girls than boys exposed to harmful content online."

Teenage girls are more likely than boys to be asked for nude photos online or be sent pornography or content promoting self-harm, a report has found. The report is based on survey responses from around 6,500 young people, and it found that girls are "much more likely to experience something nasty or unpleasant online."

YouTube, WhatsApp, Snapchat, and TikTok were the most popular social media sites for both age groups surveyed, but more than three-quarters of 14-18-year-olds also used Instagram.

Many respondents reported spending significant amounts of time online. For instance, a third of 14-18-year-olds reported spending four hours or more online during a school day. Almost two-thirds reported spending more than four hours online at weekends. One in five 14-18-year-olds said they spent more than seven hours a day online on weekends.

One in five children and young people who took part in the research said something nasty or unpleasant had recently happened to them online. The most common experience was having "mean or nasty comments" made about them or sent to them. But there was a difference between boys and girls in the type of nasty online experience they had. Girls were more likely to have mean or nasty comments made about them or rumors spread about them.

More than 5% of girls aged 14-18 said they had been asked to send nude photos or videos online or expose themselves, three times higher than the rate among boys. More than 5% of 14-18 year-old girls also said they had seen or been sent pornography, and twice as many girls as boys reported being sent "inappropriate photos" they had not asked for. More girls than boys also reported being sent content promoting suicide, eating disorders and self-harm.

China Regulating Generative AI Use

Chinese regulators have released draft rules designed to manage how companies develop generative artificial intelligence products like ChatGPT.
The Cyberspace Administration of China (CAC) has laid out ground rules that generative AI services have to follow, including the type of content these products are allowed to generate.

One rule is that content generated by AI needs to reflect the core values of socialism and should not subvert state power. The rules are the first of their kind in the country. China is not the only country concerned with the development of generative AI. Italy banned ChatGPT in March citing privacy concerns.

Chinese technology giants Baidu and Alibaba have launched their own ChatGPT-type applications. Alibaba unveiled Tongyi Qianwen and Baidu launched its Ernie Bot.

Though some people fear AI, others fear restrictions and rules governing tech development. I am cautious on both counts, but some of the CAC rules seem reasonable. For example, one requires that the data used to train these AI models not discriminate against people based on things like ethnicity, race, and gender.

These measures are scheduled to come into effect later this year. China already has regulations around data protection and algorithm development.


ChatGPT - That AI That Is All Over the News

Dear ChatGPT

So far, the biggest AI story of 2023 - at least in the education world - is ChatGPT. Chances are you have heard of it. If you have been under a rock or buried under papers you have to grade, ChatGPT stands for Chat Generative Pre-trained Transformer. It is the newest iteration of the chatbot that was launched by OpenAI in late 2022.

OpenAI has a whole GPT-3 family of large language models. ChatGPT has gotten attention for its detailed responses and articulate answers across many domains of knowledge. But in Educationland, the buzz is that students will use it to write all their papers. The first article someone sent me had a title like "The End of English Classes."

People started to test it out, and reactions ranged from amazement at how well it worked to criticism of very uneven factual accuracy.

Others have written about all the issues in great detail, so I don't need to repeat that here, but I do want to summarize some things that have emerged in the few months it has been in use with the public, and provide some links for further inquiry.

  • Currently, you can get a free user account. I was hesitant at first to register because it required giving a mobile phone number and I don't need more spam phone calls, but I finally created an account so I could do some testing (more on that in my next post).
  • OpenAI is a San Francisco-based company doing AI research and deployment and states that their mission is "to ensure that artificial general intelligence benefits all of humanity."
  • "Open" may be a misnomer in that the software is not open in the sense of open source and the chatbot will not be free forever.
  • ChatGPT can write essays and articles, come up with poems and scripts, answer math questions, and write code - all with mixed results.
  • AI chatbots have been around for quite a while. You probably have used one online to ask support questions and tools like Siri, Alexa, and others are a version of this. I had high school students making very crude versions of chatbots back in the last century based on an early natural language processing program called Eliza that had been written in the mid-1960s at MIT.
  • Schools have been dealing with student plagiarism since there have been schools, but this AI seems to take it to a new level since OpenAI claims that the content the bot produces is not copied but that the bot generates text based on the patterns it learned in the training data.
  • This may be a good thing for AI in general or further fuel fears of an "AI takeover." You can find more optimistic stories about how AI is shaping the future of healthcare. It can accurately and quickly analyze medical tests and find connections between patient symptoms, lifestyle, drug interactions, etc.
  • I also see predictions that as AI automates the once humans-only skill of writing, our verbal skills will carry more weight.
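
The Eliza-style pattern matching mentioned in the list above is simple enough to sketch. Here is a minimal Python version, in the spirit of what students might build; the rules and responses are my own illustrative examples, not Weizenbaum's original script:

```python
import random
import re

# Each rule pairs a regex pattern with canned response templates.
# The real Eliza (Weizenbaum, mid-1960s) used a richer keyword-ranking
# scheme; this is a toy illustration of the general idea only.
RULES = [
    (re.compile(r"\bi am (.*)", re.I),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bi feel (.*)", re.I),
     ["Why do you feel {0}?", "Do you often feel {0}?"]),
    (re.compile(r"\bmy (\w+)", re.I),
     ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "I see. Can you elaborate?"]

def respond(text: str) -> str:
    """Return a canned response for the first rule that matches."""
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            # Fill the template with whatever the user said after the keyword.
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)
```

Typing "I am tired" would get back something like "Why do you say you are tired?" - the bot has no understanding at all, which is exactly why students found it both impressive and easy to break.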

You can write to me at serendipty35blog at with your thoughts about these chatbot AI programs or any issues where tech and education cross paths for better or worse.


Forbes says that "ChatGPT And AI Will Fuel New EdTech Boom" because venture capitalists predict artificial intelligence, virtual reality, and video-learning startups will dominate the space in 2023.

This opinion piece compares ChatGPT to the COVID pandemic!

The New York Times podcast, The Daily, did an episode that included tests of the bot.

A teacher friend posted on his blog a reaction to the idea that ChatGPT is the death of the essay. He says, "And here's my point with regard to artificial intelligence: if students are given the chance and the encouragement to write in their own voices about what really matters to them, what possible reason would they have for wanting a robot to do that work for them? It's not about AI signaling the death of writing. It's about giving students the chance to write about things they care enough about not to cheat."

OpenAI is not alone in this AI approach. Not to be outdone, Google announced its own Bard, and Microsoft also has a new AI that can do some scary audio tricks.

People are already creating "gotcha" tools to detect things written by ChatGPT.
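
One signal such detection tools reportedly look at is "burstiness" - human writing tends to mix short and long sentences, while generated text can be more uniform. Here is a toy Python sketch of that single signal; it is my own simplified illustration, not the method of any actual detection tool, and real detectors combine signals like this with model-based perplexity scores:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words.

    A low value means very uniform sentence lengths, which *may*
    hint at machine generation. This is a toy heuristic only and
    is easily fooled in both directions.
    """
    # Crude sentence split on terminal punctuation.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)
```

A passage of identically sized sentences scores 0.0, while prose that alternates one-word and long sentences scores much higher - which also shows why a heuristic this simple can never be a reliable "gotcha."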

I found a lesson for teachers about how to use ChatGPT with students. Here is a result of asking ChatGPT to write a lesson plan on how teachers can use ChatGPT with students.

Do You Own Your Face Online?

Image: Clker-Free-Vector-Images from Pixabay

Who owns the rights to my face? I assumed it was me until I read an article that reminded me that when we create social media accounts, we pretty much agree to grant those platforms a free license to use our content as they wish.

In most cases, you hold the copyright to any content you upload to social media platforms. But when you created your account on Facebook, Twitter, Instagram, TikTok, or any platform, you agreed to grant them a free license to use your content as they wish. How can they use it? It depends, but did you read the user agreement or just click "continue"?

How would you feel if you saw one of your tweets used in a Twitter ad campaign? Violated? Angry? Excited? Feel as you wish, but don't expect any cut of the ad's revenue.

In that article, a woman named Abby sees a sponsored Instagram Story ad with a video of a person putting on lip balm. The person was her. She watched herself apply the balm and smile at the camera, but she had never agreed to appear in a nationwide social campaign. How is this possible?

Usage rights dictate who owns an image or asset. They determine how and where it’s allowed to appear, and for how long.

The author had worked in media and knew that employees are often "pressured" to appear in campaigns, even though it is not part of their full-time job and the work will likely go uncompensated.

In this case, she had been told to participate in a photoshoot demonstrating the product’s healing benefits. She recorded for the workday, was not paid, and believed the campaign would only run on the employer’s social media accounts for a few months. But the ad appeared more than a year later. Her former employer probably passed the content to the skincare company, without her permission.

There's an old saying that if you're not paying for a product, then you are the product. Social media sites like Facebook and Instagram are completely free to use for the average consumer because advertisers pay for your attention (and sometimes your data). This is not a new model. In commercial TV broadcasting, you watch content for free because there are commercials. A more cynical explanation is that you pay for the privilege of having yourself sold. You are consumed. You are the product. They deliver you to the advertiser. The advertiser is their customer.

Think about that the next time you read - or choose not to read - the terms and conditions and agree with a click.


This article is also crossposted at One-Page Schoolhouse

Parental Control of Technology

kids on tech
Photo by Andrea Piacquadio

As the new school year begins for all students this week, Mozilla (Firefox) has published a series titled "Parental Control" about ways to empower parents facing some technology challenges. That sounds like a good thing, but particularly when it applies to schools, parental control has cons along with pros.

Many digital platforms offer parental control settings. The most common and most popular setting allows parents to shield young people from “inappropriate” content. But restricting "mature" or "inappropriate" content takes us into a controversial area. Who defines what should be restricted? Mozilla says that "the way platforms identify what that means is far from perfect."

YouTube has apologized after its family-friendly “Restricted Mode” recently blocked videos by gay, bisexual and transgender creators, sparking complaints from users. Restricted Mode is an optional parental-control feature that users can activate to avoid content that’s been flagged by an algorithm.

That example takes me back to the earliest days of the Internet in K-12 schools when filters would block searches for things like "breast cancer" because "breast" was on the list of blocked words.
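
Those early filters often did nothing smarter than raw substring matching, which is why "breast cancer" got caught. A minimal sketch of that over-blocking problem, with an illustrative one-word blocklist of my own (not any school's real list):

```python
# A sketch of the naive substring filters early school networks used.
# Matching raw substrings blocks legitimate queries ("breast cancer")
# along with the intended targets - the classic over-blocking problem.
BLOCKED_WORDS = {"breast"}  # illustrative entry, not a real blocklist

def naive_filter(query: str) -> bool:
    """Return True if the query would be blocked by substring matching."""
    q = query.lower()
    return any(word in q for word in BLOCKED_WORDS)
```

With this filter, a search for "breast cancer research" is blocked while "lung cancer research" passes, even though both are equally legitimate health queries. The same definitional problem - who decides what the blocklist contains, and at what cost in false positives - is exactly what the Restricted Mode example above runs into.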

Limiting screen time is another strategy and is within a parent's control but is certainly controversial within a family. Kids don't like their screen time to be limited.

Mozilla actually had questions for itself about what to call the series. They quote Jenny Radesky, an MD and Associate Professor of Pediatrics-Developmental/Behavioral at the University of Michigan, as saying that “Parental mediation is [a better] term, parental engagement is another – and probably better because it implies meaningful discussion or involvement to help kids navigate media, rather than using controlling or restricting approaches.” She pointed to research that suggests letting children manage their own media consumption may be more effective than parental control settings offered by apps.

The internet has risks, but so do parental controls. Many kids in the LGBTQI+ community can be made vulnerable by tech monitoring tools.

Sensitive information about young people can be exposed to teachers and campus administrators through the school devices they use.

As parents and educators, we want to protect students, especially the youngest ones. We also want, as a society, to instill in younger generations why privacy matters.


Source: Electronic Frontier Foundation