AI and Bias

Bias has always existed, and it has always existed online. Now, with AI, there is another layer of bias.

Bias generated by technology is “more than a glitch,” says one expert.

For example, why does AI have a bias against dark skin? It is because its data is scraped from the Internet, and the Internet is full of biased content.

This doesn't give AI a pass on bias. It is more of a comment or reflection on bias in general.

Jobs and Bots

Workers are already using bots to help them work. Will that AI replace them?

On the same day, I saw three articles about artificial intelligence that made me view AI in different ways. One article was about how a chatbot powered by the Internet had passed exams at a U.S. law school after writing essays on law topics. Another article was about a company that is developing AI for warfare but said it would only sell it to "democratic nations." The third article was about how AI makes the translation of difficult "dead" languages, as well as the interpretation of medical tests, faster and more accurate.

Jonathan Choi, a professor at the University of Minnesota Law School, gave ChatGPT the same test faced by students: 95 multiple-choice questions and 12 essay questions. He reported that the bot scored a C+ overall.

In my own essay testing, I have found that the bot can produce in seconds a "C" paper or the start of a better paper. It is impressive but it is not like a really good student's work. So far.

But many of the AI bot stories in the media are about jobs that are likely to be replaced by AI. One popular media story supposes that computer programmers, and people doing administrative work that it terms "mid-level writing," can be replaced by AI. That latter category would include work like writing emails and human resources letters, producing advertising copy, and drafting press releases. Of course, there is always the possibility that a worker could be freed from those tasks, put onto higher-level tasks, and actually benefit from the AI.

I have seen positive and negative results from using AI in media work and law. Some of the negative examples seem to me to be when the user expects too much from AI at this stage in its development.

I don't think we know today what AI and bots will change in the world of work by next year, but it is certainly an area that requires attention from individuals and from those who can affect the broader culture.

Can Generative AI Build Me a Website?


Artificial Intelligence has gained very widespread attention in the past six months, even among people who consider themselves to be not very tech-savvy. ChatGPT and its clones have received much of the attention, but the AI floodgate has opened wide. So wide that people became fearful, and governments in the U.S. and other countries became interested in possibly restricting its growth.

Google introduced a tool to help you write. Grammarly, the writing assistant that checks your writing, now has a feature to help you write too. Before we put a pause on AI growth, I want to consider how it is already being used in building websites.

You may know that AI can write or revise the code behind websites and applications. I won't comment on that because it's not my strongest area. One of the problems I always encounter when starting on a new website with a client is content readiness. Writing website copy should be something that a client is intimately involved in doing. I'm okay with editing content but prefer clients to write their own initial copy as much as possible. Generative AI technology can draft surprisingly high-quality marketing copy.

I build and maintain some sites using Squarespace and they have integrated generative AI technology into the platform. It is used in their rich text editor, which powers all website text, providing you with predictive text. As with other chat tools, you write a prompt and the AI will generate a draft of copy that you can insert into the text block with a single click.
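To make the workflow concrete, here is a toy sketch of how a prompt for website copy might be assembled before being handed to a generative AI tool. The function name and its fields are my own illustration, not Squarespace's actual interface or any real API.

```python
# Assemble a plain-text prompt describing the website copy we want.
# This is a hypothetical helper for illustration only; the real
# platform tools take a prompt typed directly into the editor.

def build_copy_prompt(business: str, audience: str, tone: str, section: str) -> str:
    """Combine a few details about the site into one copy-writing prompt."""
    return (
        f"Write {section} copy for the website of {business}. "
        f"The audience is {audience}. Use a {tone} tone. "
        "Keep it under 100 words."
    )

prompt = build_copy_prompt(
    business="a family-run bakery",
    audience="local customers",
    tone="warm, friendly",
    section="an About Us",
)
print(prompt)
```

The point of a helper like this is that the client still supplies the specifics (business, audience, tone); the AI only drafts from them, which keeps the client involved in the copy.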

AI isn't building an actual website quite yet, but no doubt it will one day. And you still need humans feeding it the content, checking it over, and placing it in a design frame. Platforms like Squarespace, WordPress, WIX, et al. have made building a site much easier, and all those platforms will get more intelligent in the next year. Artificial intelligence combined with human intelligence will, I hope, still produce the best designs.

Are You Tired of Hearing About AI Yet?

I ask "Are You Tired of Hearing About AI Yet?" but the question is rhetorical because whether you answer Yes or No, AI is still going to be big news for the foreseeable future.

This month some tech big shots were summoned to the principal's office - well, the White House - and told they must protect the public from the dangers of Artificial Intelligence (AI). Sundar Pichai of Google, Satya Nadella of Microsoft, and OpenAI's Sam Altman were told they had a "moral" duty to safeguard society and that the administration may decide to regulate the sector further.

AI products like ChatGPT and Bard have gone mainstream and interacting with "generative AI", which was once the domain of computer scientists, is now something kids are doing. It is writing student assignments by summarizing information from multiple sources, debugging computer code, writing presentations, and even taking a shot at poetry. Some of it reads believably human-generated. Some does not. But it does it in seconds.

Altman of OpenAI commented that in terms of regulation, executives were "surprisingly on the same page on what needs to happen."

I'm sure it came up in those conversations that, earlier that week, the "godfather" of AI, Geoffrey Hinton, had quit his job at Google, saying he now regretted his work. Then again, he is 75, so "quit" might also be called "retired."

Google Will 'Help Me Write'

Google recently introduced a new feature to their Workspace suite that they call "Help Me Write." This generative AI will first appear in Gmail and Google Docs. At the moment, it's available to a select audience of invited testers.

Like other generative AI, you will be able to enter a prompt and have a first draft created for you. An example Google shared is not having it write a paper for your English class, though it could probably do that; instead, they show it creating a job description for a regional sales representative.

It's another AI tool that might frighten teachers because it seems to help students unfairly, but I think this may be a misperception. As with other AI tools, such as the much-discussed ChatGPT, I think the best thing educators can do is introduce this to students and guide them in the ways it can be used best, and used legitimately.

The evolution of digital literacy in classrooms will never end. Yes, these kinds of AI-assisted writing tools present both opportunities and challenges for educators. But ignoring them, or trying to ban them from student use, is certainly not the solution. This tool and others like it are an opportunity to improve student writing skills and critical thinking.


China Regulating Generative AI Use

Chinese regulators have released draft rules designed to manage how companies develop generative artificial intelligence products like ChatGPT.
The draft measures from the Cyberspace Administration of China (CAC) lay out ground rules that generative AI services have to follow, including the type of content these products are allowed to generate.

One rule is that content generated by AI needs to reflect the core values of socialism and should not subvert state power. The rules are the first of their kind in the country. China is not the only country concerned with the development of generative AI. Italy banned ChatGPT in March citing privacy concerns.

Chinese technology giants Baidu and Alibaba have launched their own ChatGPT-type applications. Alibaba unveiled Tongyi Qianwen and Baidu launched its Ernie Bot.

Though some people fear AI, others will fear restrictions and rules governing tech development. I am cautious on both of those issues, but some of the CAC rules seem reasonable - for example, requiring that the data used to train these AI models not discriminate against people based on things like ethnicity, race, and gender.

These measures are scheduled to come into effect later this year. China already has regulations around data protection and algorithm development.